posted 13 days ago on techdirt
I don't get to use the phrase "with alacrity" that often, but Baton Rouge store owner Abdullah Muflahi's filing of a lawsuit against the Baton Rouge police can only be described as that. Following the shooting of Alton Sterling by Baton Rouge police officers, Muflahi's store was raided by law enforcement officers who took the hard drive containing the store's surveillance camera footage of the altercation. So far, everyone involved has refused to discuss the illegal seizure of Muflahi's recording equipment, deferring to the FBI and its investigation of the shooting -- which would be something if the FBI would answer questions about the seizure and current location of the hard drive... but it won't talk about it either. Hence the speedily-filed lawsuit by Muflahi, as reported by Mike Hayes of Buzzfeed:

The owner of the Triple S Food Mart in Baton Rouge where Alton Sterling was fatally shot on July 5 says police detained him for hours while seizing his security footage of the incident without a warrant, according to a lawsuit [PDF] filed Monday. 28-year-old Abdullah Muflahi says that police at the scene placed him in a locked police car for four hours and denied him access to his cell phone, preventing him from contacting his family or an attorney. According to the lawsuit, police wouldn't even allow Muflahi to go back into his store to use the restroom during his detention, forcing him to urinate outside of his store in full view of the public.

And his detention didn't end there.

Muflahi was taken back to the Louisiana State Police headquarters and held for another two hours while officers questioned him.

This all sounds very suspicious, illegal, and retaliatory. Muflahi not only had CCTV footage of the shooting, but also filmed it with his own cell phone, providing one of the two "unofficial" accounts of the arrest. While it's fantastic that a recent Supreme Court decision may have resulted in officers' reluctance to seize/search Muflahi's cell phone, the Fourth Amendment itself seemed to have little effect on their decision to enter his store and seize his recording equipment without a warrant. While the recording could correctly be described as "evidence," that doesn't excuse a warrantless entry or seizure.

The lawsuit, unfortunately, is a little thin when it comes to establishing anything that might overcome the immunity that shields individual officers from the consequences of their actions. While it does suggest the Baton Rouge Police Department's training is inadequate, it really doesn't go into detail as to why the court should be expected to believe this assertion. However, it does make an allegation that could be interesting if the court decides to explore it:

[Baton Rouge Police Chief Carl Dabadie] has negotiated a contract with a union representing police officers that provides a blanket indemnification for police officers who are sued by the public from all claims no matter what the circumstances under which the claim arise and further provides that meritorious complaints about police officers are purged from employment files after only 18 months. Both contract provisions encourage aggressive conduct by police officers by minimizing consequences.

It's common knowledge that police union contracts are generally constructed to shield officers from not only public scrutiny, but internal misconduct investigations as well.
Most of these are complemented by a "Law Enforcement Bill of Rights" that gives officers up to three days to ignore questions about alleged misconduct or excessive force. These "extra rights" are often granted in the face of police union pressure, and the unions themselves are heavily involved in the drafting of department discipline policies. Unions also help fired officers regain their positions, making it even harder for law enforcement agencies to rid themselves of the "bad apples" continually spoiling the rest of the "bunch." While there's zero chance any decision would result in an alteration of the union's relationship with the Baton Rouge police department or the policies it helped draft, any discussion would at least shine a little more light on how these unions tend to make bad policing/policies even worse.

posted 13 days ago on techdirt
If you spend any time online, you've by now noticed that the internet this week belched forth a tidal wave of incessant chatter over Pokemon Go, Nintendo's new augmented reality game involving scrambling around real-world locations to "catch" collectible, virtual beasts with your phone. The game is by any standard a smashing success, boosting Nintendo's market cap by an estimated $9 billion in two days with the app rocketing to the top of both major app stores. The phenomenon is, frankly, pretty amazing:

Pokemon GO is just insane right now. This is in Central Park. It's basically been HQ for Pokemon GO. pic.twitter.com/3v2VfEHzNA
-- Jonathan Perez (@IGIhosT) July 11, 2016

As with any massive phenomenon involving tech many people don't really understand (augmented reality in this case), the news wires immediately lit up with all manner of hysteria over the game's impact on the real world, with much of this impact wholly imagined as sites rushed to pursue search trends and ad eyeballs. The media being, well, the media, one hoax website was able to get countless news outlets to parrot all manner of fake stories about Pokemon Go, from claims that brothers were killing brothers to reports that major traffic accidents were being caused by players running out into the middle of traffic to collect creatures that technically don't exist. An ouroboros of phantoms chasing phantoms.

The media also stumbled all over itself to pounce on claims that the Pokemon Go app was a privacy nightmare, busily reading your e-mail and digging through an ocean of personal data that would any second now be in the hands of nefarious hackers. Most of these reports had to be subsequently walked back with updates after analysts actually bothered to study the app and reporters started (gasp) actually asking questions about just what the app was really doing:

"But in a call with Gizmodo, Reeve backtracked his claims, saying he wasn't "100 percent sure" his blog post was true. On the call, Reeve also admitted that he had never built an application that uses Google account permissions, and had never tested the claims he makes in the post. Cybersecurity expert and CEO of Trail of Bits Dan Guido has also cast serious doubt on Reeve's claim, saying Google tech support told him "full account access" does not mean a third party can read or send email, access your files or anything else Reeve claimed. It means Niantic can only read biographical information like email address and phone number."

While the app did appear to be asking for broader Google account permissions than was necessary (on iOS and less frequently on Android), both Google and app-maker Niantic issued a statement noting this was a bug they're busy fixing and that no personal information had actually been accessed:

"We recently discovered that the Pokémon Go account creation process on iOS erroneously requests full access permission for the user's Google account. However, Pokémon Go only accesses basic Google profile information (specifically, your user ID and e-mail address) and no other Google account information is or has been accessed or collected. Once we became aware of this error, we began working on a client-side fix to request permission for only basic Google account information, in line with the data we actually access. Google has verified that no other information has been received or accessed by Pokémon Go or Niantic.
Google will soon reduce Pokémon Go's permission to only the basic profile data that Pokémon Go needs, and users do not need to take any actions themselves."

And while a bug that gives broader permissions than necessary is bad, it was far from the "hacker's dream" and "privacy trainwreck" portrayed by dozens upon dozens of different outlets. Meanwhile, most of the data being collected is a fraction of the data being hoovered up and sold daily by your wireless carrier, something routinely forgotten by those laboring under the illusion that privacy in the cellular era still actually exists.

None of this is to say that many of the stories bubbling up amidst the Pokemon Go chaos aren't incredibly interesting. Watching police having to remind players that the laws of the state (and of reality) still apply while playing the game has proven pretty fascinating. Interesting too are conversations about whether African Americans and Muslim Americans will have a decidedly different and potentially unpleasant experience playing the game in the land of shoot first, think later law enforcement. But the most interesting story remains the meta narrative of a press so focused on profitability and being first that it couldn't give a flying Aerodactyl about actually being right.
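For the curious, the fix Niantic describes boils down to a standard OAuth 2.0 idea: ask only for the scopes you need. Here's a minimal sketch of what a "basic profile only" sign-in request looks like. This is an illustration, not Niantic's actual code; the client ID and redirect URI are placeholders, though the endpoint and parameters follow Google's standard OAuth 2.0 flow.

    from urllib.parse import urlencode

    # Google's standard OAuth 2.0 authorization endpoint.
    GOOGLE_AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

    # Placeholders -- a real app registers these values with Google.
    CLIENT_ID = "example-app.apps.googleusercontent.com"
    REDIRECT_URI = "https://example.com/oauth/callback"

    def build_signin_url() -> str:
        """Build an authorization URL requesting only basic identity data."""
        params = {
            "client_id": CLIENT_ID,
            "redirect_uri": REDIRECT_URI,
            "response_type": "code",
            # Minimal scopes: an OpenID identifier plus the email address.
            # Nothing here grants access to mail, files, or other account data.
            "scope": "openid email",
        }
        return f"{GOOGLE_AUTH_ENDPOINT}?{urlencode(params)}"

    if __name__ == "__main__":
        print(build_signin_url())

The design point is simply that the scope string, not the app's good intentions, defines what the user is asked to hand over.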

posted 13 days ago on techdirt
If you're curious about all of the new VR tech and apps coming out, the $19 Virtual Reality Box Headset is a good place to start that won't break the bank. The headset works with smartphones that have 4.7"-6" displays. It has optical axis sliding control to adjust sight distance to provide a satisfying viewing experience for all. The T-shaped headband is fully adjustable to ensure a comfortable fit, and the high-quality lens technology keeps images pure and distortion-free so you can better immerse yourself in the virtual world. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

posted 13 days ago on techdirt
We've been writing for a while now about how the FBI has been trying to rewrite a key part of the PATRIOT Act to massively expand its ability to use National Security Letters (NSLs) to get email and browser information with no warrant and no oversight. The FBI had been asking for this just days before the Orlando shooting, and right after it, a bunch of Senators, led by John McCain, used the opportunity to fast track that legislative change, cynically pointing to the shooting as a reason why it's needed (despite it having nothing whatsoever to do with that). That effort failed, but just barely -- and it's expected to be brought up again shortly for another vote. Senators Ron Wyden and Martin Heinrich are trying to convince people that this is a bad, bad idea. They've written a short but compelling article on how this is a massive abuse of privacy, and why the FBI absolutely does not need this power:

Given what web browsing history can reveal, there is little information that could be more intimate. If you know that a person is visiting the website of a mental health professional, or a substance-abuse support group, or a particular political organization, or a particular dating site, you know a tremendous amount of private and personal information about him or her. That's what you get when you can get access to their web browsing history without a court order.

They note that there are real threats, but this change in the law won't help to stop those threats. Indeed, the FBI already has the ability to get this information -- it just needs to submit to a tiny bit of oversight:

But the FBI already has at least two separate ways they can quickly obtain these electronic records with court oversight.
 
First, under the Patriot Act’s section 215, the FBI can get a court order from the Foreign Intelligence Surveillance Court to obtain a suspect’s electronic records. The president’s surveillance review group, which included former top intelligence officials, said this kind of court oversight should be required for this kind of information.
 
Second, in emergency situations where the FBI believes it needs to move immediately, it already has the authority to get these records first, and then settle up with the court afterward. This authority comes from section 102 of the USA FREEDOM Act, which is based on language Sen. Wyden authored and we both strongly supported.
 
This effort to expand the FBI's surveillance powers should be a non-starter, and it's depressing that so many Senators are willing to grant the FBI near total freedom to spy on our electronic records without a warrant. Given the FBI's history of abusing its surveillance powers, sometimes for political gain, shouldn't Congress be restricting such powers, rather than expanding them? Stay tuned: later today on the Techdirt Podcast, we'll have Senator Wyden discussing why this is so problematic.

posted 13 days ago on techdirt
This is becoming quite the stupid trend: people who are true victims of terrorist attacks suing internet platforms because terror-associated groups are using those platforms generally. It began back in January, when a woman sued Twitter after her husband was apparently killed in an ISIS attack. The lawsuit made no connection between the use of Twitter and the attack. It's just "husband died in ISIS attack" and "ISIS people use Twitter." The judge in that case is not at all impressed and seems likely to dismiss it shortly. In the meantime, another similar case was filed against Twitter, Facebook and Google. And now... we've got a third such case, filed against Facebook and asking for a billion dollars. A billion dollars.

The lawsuit was filed by the families of some people who were killed in a Hamas attack. And the entire complaint is basically "Hamas killed these guys, Hamas uses Facebook, give us a billion dollars." It goes through a variety of stories, each involving Hamas or Hamas-affiliated attacks, without any actual connection to Facebook, other than "and they also used Facebook to celebrate."

Yes, the situation is horrifying and awful. No doubt about that. But blaming Facebook for it is idiotic... and also likely to go absolutely nowhere. Facebook is clearly protected by Section 230 of the CDA and it would be amazing if a court didn't toss this lawsuit very quickly. And, yes, obviously it's absolutely horrible if your family member is killed in a terrorist event. I'm sure I'd be distraught and angry and many other feelings that I can't begin to imagine. But lashing out at various neutral social media platforms is just ridiculous. It stinks of being a Steve Dallas lawsuit, in which lawyers decide to sue tangentially related companies because that's where the money is.

Meanwhile, Hamas is already claiming that this lawsuit is proof that the US is fighting against "freedom of the press and expression." Of course, that assumes that the lawsuit will actually go anywhere, which seems ridiculously unlikely. Terrorist attacks are a real problem. Suing Facebook or other social media platforms isn't going to help one bit.

posted 13 days ago on techdirt
Tell me if you've heard this one before: broadband carriers are once again claiming that if regulators pass net neutrality rules, their ability to invest in next-generation networks will somehow be severely hindered, causing no limit of damage to consumers, puppies, and the time-space continuum. That's basically the line U.S. broadband providers tried to feed the FCC in the States. But no matter how many industry-tied, cherry-picking think tank studies have tried to claim that net neutrality hurts broadband investment, real world data and ongoing deployment show that just isn't true.

As we noted last October, Europe passed net neutrality rules that not only don't really protect net neutrality, but actually give ISPs across the EU's 28 member countries the green light to violate net neutrality consistently -- just as long as ISPs provide a few flimsy, faux-technical justifications. The rules are so filled with loopholes as to be useless, and while they technically took effect on April 30, the European Union's Body of European Regulators of Electronic Communications (BEREC) has been cooking up new guidelines to help European countries interpret and adopt the new rules. With BEREC's public comment period set to end on July 18, European net neutrality advocates are giving it one last shot to toughen up the shoddy rules.

Fearing they might succeed, a coalition of twenty European telcos (and the hardware vendors that feed at their collective trough) have taped together something they're calling their "5G Manifesto" (pdf), which trots out some pretty familiar fear mongering for those who've remotely followed the last fifteen years of net neutrality debate. Among the tactics is the continued, not so veiled threat that technological progress will stop dead in its tracks if these companies don't get the kind of consumer net neutrality protections they want (namely, none):

"The EU and Member States must reconcile the need for Open Internet with pragmatic rules that foster innovation. The telecom Industry warns that the current Net Neutrality guidelines, as put forward by BEREC, create significant uncertainties around 5G return on investment. Investments are therefore likely to be delayed unless regulators take a positive stance on innovation and stick to it."

And the threat doesn't just involve next-gen wireless. The carriers also proceed to effectively argue that unless they're allowed to include huge gaping loopholes (like the existing exemption of "specialized services"), other technologies like VR, smart cars and smart cities will all be hurt (much like ISPs here in the States tried to argue that net neutrality rules would somehow hurt medical technology unless ISPs were allowed to discriminate):

"In this context we must highlight the danger of restrictive Net Neutrality rules, in the context of 5G technologies, business applications and beyond. 5G introduces the concept of "Network Slicing" to accommodate a wide variety of industry verticals' business models on a common platform, at scale and with services guarantees. Automated driving, smart grid control, virtual reality and public safety services are examples of use cases with distinguished characteristics which call for a flexible and elastic configuration of resources in networks and platforms, on a continuous basis, depending on demand, context and the nature of the service."

This is all, for lack of a more scientific term, unequivocal and total crap.
The argument that "net neutrality rules will stop us from keeping your pacemaker working" is fear-based prattle with no foundation in reality. If anything, the EU's rules go well out of their way to ensure traffic can be treated differently (to an extreme fault). As for 5G, these upgrades are a necessary part of doing business, and carriers will invest in networks whether or not there's some flimsy net neutrality rules governing their behavior. Realize too that the "manifesto" is talking about rules as currently written that effectively say it's ok to violate net neutrality, provided you dress up your anti-competitive behavior in veiled, faux-technical justifications (see comments made by Sir Tim Berners-Lee). In short, people should understand these European companies' lawyers and lobbyists directly wrote net neutrality rules pretty much ensuring they can do whatever they like -- about as "certain" as things are going to get -- yet they're still god-damned complaining.

When it isn't busy making empty threats, the manifesto trots out some similarly-meaningless promises, such as claims that the "right" net neutrality rules will result in scheduled large-scale 5G demonstrations by 2018, and the launch of 5G commercially in at least one city in every EU country by 2020. Again though, this was already happening with or without net neutrality rules. Tying the success or failure of network investment to net neutrality is a hollow bogeyman, one we've seen used repeatedly in countries where carrier executives twitch at the faintest specter of a regulator actually doing its job and protecting consumers from the aggressive abuse of uncompetitive telecom markets.

posted 13 days ago on techdirt
For some reason, gamemaker Blizzard has long been smitten with the idea of twisting copyright law into an ugly pretzel in order to sue anyone who makes a hack or cheat for one of its games. They did this concerning Starcraft, then World of Warcraft, and then Starcraft 2. This lawsuit tactic is starting to become something of a rite of passage for Blizzard's games, but the tactic in question makes little sense. Blizzard's argument can be roughly translated as: cheats and hacks break the EULA for the game, the game is licensed by the EULA instead of being owned by anyone paying for it, the game does regular copying of code and files while in use, therefore a hack or cheat that breaks the EULA renders all of that routine copying copyright infringement. While this wrenching of copyright into these kinds of lawsuits has nothing to do with the actual purpose or general application of copyright law, many cheer these moves on, because cheaters within the communal games we play are annoying. But the ends don't justify the means, and this kind of twisting of copyright law is dangerous, as we've pointed out in the past.

Not that that's stopped Blizzard from utilizing this tactic, of course. In fact, recent Blizzard success Overwatch has become the latest to achieve this rite of passage.

While most Overwatch players stick to the rules, there's also a small group that tries to game the system. By using cheats such as the Watchover Tyrant, they play with an advantage over regular users. Blizzard is not happy with the Overwatch cheat and has filed a lawsuit against the German maker, Bossland GmbH, at a federal court in California. Bossland also sells cheats for various other titles such as World of Warcraft, Diablo 3 and Heroes of the Storm, which are mentioned in the complaint as well. The game developer accuses the cheat maker of various forms of copyright infringement, unfair competition, and violating the DMCA's anti-circumvention provision. According to Blizzard these bots and cheats also cause millions of dollars in lost sales, as they ruin the games for many legitimate players.

And it might indeed be true that these cheat hacks piss off some Overwatch gamers and might even drive some of them away from the game, costing Blizzard revenue. But, and I cannot stress this enough, that doesn't suddenly make any of this copyright infringement. To see what lengths Blizzard's legal team is going to in order to twist this all together, one need look only at the claims the filing makes. First, it claims that Bossland is committing contributory infringement by offering the hack, because the hack breaks the EULA, which makes accessing the game suddenly fraudulent, and all the routine copying the game does becomes copyright infringement. This, again, relies on the idea that the game is licensed rather than bought, and that breaking the EULA renders the license invalid. This has never been the way copyright has worked in the past. Second, the filing claims that the hack's ability to provide a graphical overlay over the regular game is the creation of a derivative work, which is also copyright infringement. Except the overlay isn't copying any part of the game, nor is it making works expanding on the game. It's just an overlay, or a HUD. Only then does the filing accuse Bossland of contractual interference, which is probably the most sound charge in the whole thing.
Even then, hacks and cheats have long been a staple of the video game ecosystem, with most gamemakers embracing modding communities, and even embedding cheats within their own games. This has changed somewhat with the rise of online multiplayer games, where these kinds of cheats break the game in some ways, but still, entering into a legal challenge over all of this instead of jumping back into the fray of game development to try to keep the cheaters out seems strange. And filing all of this in a California court has pretty much everyone, including the folks at Bossland, scratching their heads.

TF spoke with Bossland CEO Zwetan Letschew, who informed us that his company hasn't received the complaint at its office yet. However, they are no stranger to Blizzard's legal actions. "There are over 10 ongoing legal battles in Germany already," Letschew says, noting that it's strange that Blizzard decided to take action in the US after all these years. "Now Blizzard wants to try it in the US too. One could ask himself, why now and not back in 2011. Why did Rod Rigole [Blizzard Deputy General Counsel] even bother to fly to Munich and drive with two other lawyers 380 km to Zwickau. Why not just sue us in the US five years ago?" While Letschew still isn't convinced that the lawsuit is even real, he doesn't fear any legal action in the U.S. According to the CEO, a California court has no jurisdiction over his company, as it has no ties with the United States.

It should be noted that much of the time these legal attempts by Blizzard don't result in wins for its legal team. And that's not even taking into account the questions of jurisdiction and/or what a California court ruling will mean for a company abroad. I'm a little lost as to why Blizzard is even bothering with this, to be honest.

posted 13 days ago on techdirt
When it comes to biometrics, you really can't beat DNA. You can always erase your fingerprints, or wear contact lenses to fool iris scanners, but there's no way of changing all your DNA enough to make it unrecognizable (even with the new CRISPR technique). Couple that with the fact that we are shedding DNA everywhere we go -- leaving tell-tale markers on everything we touch -- and you have the perfect surveillance mechanism. That's why earlier UK plans to give police access to medical databases are problematic, to say nothing of Kuwait's mandatory DNA database for all citizens, residents and visitors. Now Rick Falkvinge has written a post about troubling moves in Sweden:

Since 1975, Sweden has taken a DNA sample from all newborns for medical research purposes, and asked parents' consent to do so for this research purpose. This means that over time, Sweden has built the world's most comprehensive DNA database over everybody under 43 years of age. But now, politicians are considering opening up this research-only DNA database to law enforcement and private insurance companies.

As Falkvinge points out, this is not just a betrayal of trust, it is totally counterproductive:

This is, of course, an outrageous and audacious breach of contract with the parents who were promised the sample would be used only for the good of humanity in terms of medical research. The instant there's a mere suspicion that this will be used against the sampled newborn in the future -- as is the case now -- instead of being used for the good of humanity as a whole, people won't provide the DNA database with more samples, or at least not enough samples to provide researchable coverage.

The risk that Sweden might proceed down this road is also a reminder that once such huge databases are created, it is almost inevitable that one day someone will come along and say: "since we have this information, surely nobody could object to it being used to catch terrorists/pedophiles/rapists etc. etc." And as the news from Sweden shows, initial promises that such sensitive data will only be used for research are worthless, since they can always be revoked later on, and there is no easy way of removing the data once it is on the database.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

posted 13 days ago on techdirt
One of the stranger stories from the past year or so in the social media space was the saga of Politwoops, the service that archives politicians' and public officials' tweets that have since been deleted. Politwoops had operated for a time before Twitter killed it off, claiming it violated its ToS. Twitter also claimed that the reason it had ended Politwoops' ability to operate was to protect the privacy of its users, including the public officials that were the target of Politwoops. Then Twitter suddenly allowed Politwoops to make its return, saying:

"We need to make sure we are serving all these organizations and developers in the best way, because that is what will make Twitter great. We need to listen, we need to learn, and we need to have this conversation with you. We want to start that today."

Basically, while Twitter still insists it's all about protecting the privacy of users of its service, it has now carved out a special place for public officials and aspiring politicians in which the archiving of deleted tweets is acceptable. It's a strange kind of reverse case in which being a notable public official suddenly affords less privilege, rather than more. And, while it's great that Politwoops has returned, the move left everyone uncertain as to exactly how Twitter would apply its user-privacy standard moving forward. Perhaps now things are a bit more clear, however, as a similar service, PostGhost, has now been shut down over the same ToS issues and user privacy excuses that initially doomed Politwoops.

PostGhost, which had just launched this week, kept copies of tweets sent by verified users with more than 10,000 followers. In Twitter's letter, posted by PostGhost, the company said that recording deleted tweets was a violation of the service's terms. PostGhost agreed to comply and shut down, but in a lengthy response, argued that such users are "public figures" that should have their tweets recorded. "We believe that for such prominent verified Twitter users, the public has a right to see their public Twitter history, whether or not they grow to regret the statements they've made," PostGhost's statement reads. Politwoops, meanwhile, remains up and active.

So, it seems that notoriety and status as a public figure are not the standard by which Twitter applies its tweet archiving rules. Instead, the space carved out for politicians and public servants appears to be a special one where the likes of celebrities and professional athletes do not operate. But if that is the line Twitter wishes to draw in its cyber-sand, it's a strange one. In areas of law, the status of public figure-hood, as opposed to public servant, is typically the standard by which all kinds of laws are applied (such as the ability to parody someone, the applicability of defamation laws, etc.). And there's good reason for this: the goal is to foster conversation and knowledge that is in the public interest. The public's interest need not be confined to politics, thankfully, yet Twitter's choices appear to reserve separate rules for the political class. I can understand why Twitter might think this makes sense. After all, I find the drunken midnight thoughts of senators far more compelling than those of a UFC fighter. But my interest isn't the same as the public interest. Twitter can engage in this flip-flopping, of course. It's their platform, after all, and they can keep it as arbitrarily closed as they like. The question becomes whether that makes the service more or less useful for the everyday Twitter user.
And that's a question that I think has an obvious answer.
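For what it's worth, the mechanics behind services like Politwoops and PostGhost are simple enough to sketch: poll a public timeline, archive everything you see, and treat any archived tweet that disappears from later snapshots as deleted. The following is a toy illustration, not either service's actual code; fetch_timeline is a hypothetical stand-in for a real Twitter API call and just returns sample data here.

    from typing import Dict, Set

    def fetch_timeline(user: str) -> Dict[int, str]:
        """Hypothetical stand-in for a Twitter API timeline call."""
        return {101: "first tweet", 103: "third tweet"}  # sample data

    archived: Dict[int, str] = {}   # every tweet ever observed
    deleted_ids: Set[int] = set()   # archived tweets no longer live

    def poll(user: str) -> None:
        live = fetch_timeline(user)
        archived.update(live)  # archive anything newly seen
        # Anything archived that's no longer live was probably deleted.
        # A real service must also distinguish deletions from tweets that
        # merely aged out of the timeline window, or it will flag false
        # positives.
        deleted_ids.update(set(archived) - set(live))

    poll("some_verified_user")
    for tweet_id in sorted(deleted_ids):
        print(f"deleted tweet {tweet_id}: {archived[tweet_id]}")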

posted 14 days ago on techdirt
For some time now, the opinion du jour in "enlightened" media circles has been to treat the news comment section (aka the customers who visit your website daily and directly) as some kind of irredeemable leper colony. One that should be nuked from orbit before the infection spreads. As such, we've seen website after website proudly crow about how they've given up on allowing site comments because a handful of posters are obnoxious, hateful little shits and the social media age means more direct community interaction is passé. These announcements usually come hand in hand with all manner of disingenuous platitudes from the editorial staff, like we killed comments because we wanted to "build relationships," or we muzzled our entire user base because we just "really value conversation." Usually, this is just code for websites that are too lazy and cheap to moderate, weed and cultivate their community garden, and find it convenient to argue that outsourcing discourse to the homogenized blather realm of Facebook is an improvement.

Since this trend began a few years back, you'll occasionally see an editor stop and realize that these disregarded masses are, warts and all, the lifeblood of a community -- and preventing them from publicly interacting on site is actually a step backwards. Case in point is the New York Times' new public editor, Liz Spayd, who this week asked a bizarre and outlandish question: what if websites were to treat these people like actual human beings and the comment section as something worth saving? Says Spayd:

What The Times and most other newsrooms mostly do now is not so much listen to readers as watch and analyze them, like fish in a bowl. They view them in bulk, through statistics measuring how many millions of "unique" users clicked on content last month, or watched a video, or came to the site multiple times, or arrived through Facebook. What would prove more fruitful is for newsrooms to treat their audience like people with crucial information to convey -- preferences, habits and shifting ways of consuming information. What do they like about what we do and how we do it? What do they want done differently? What do they turn to other sites for?

This isn't really complicated. Spayd refreshingly realizes that the rise of the comment troll is in many ways the fault of websites themselves. Writers and editors simply don't want to cultivate real conversation, because it's hard work and their current analytical tools can't monetize discourse quality. Instead, websites have begun to approach the end user relationship like the owner of a prison colony who believes the entire sordid affair can only be improved by a good, industrialized delousing or the outsourcing to bigger, meaner prisons. In reality, studies have found that comment sections can be dramatically improved -- simply by treating site visitors well and by having somebody at the website make a basic effort at fundamental human-to-human communication:

One surprisingly easy thing they found that brought civil, relevant comments: the presence of a recognized reporter wading into the comments. Seventy different political posts were randomly either left to their own wild devices, engaged by an unidentified staffer from the station, or engaged by a prominent political reporter. When the reporter showed up, "incivility decreased by 17 percent and people were 15 percent more likely to use evidence in their comments on the subject matter," according to the study.
With the daily struggle to produce more and more content in a sea of more and more competitors, it's simply easier to pretend that the comment section doesn't matter. But what's being pushed as enlightened evolution by editors is just willful obliviousness driven by lazy thinkers, incapable of embracing anything that can't be clearly, graphically monetized. It's thinking built at media empires with the multi-million dollar backing of giant conglomerates, where actual human interaction is already more easily obscured by the daily shuffle of incessant bi-coastal conference calls. Since the comment section is perhaps the most valuable source of corrections, it's also a wonderful way for such giant companies to avoid advertising that their writers may have made a mistake. I've been at the heart of one smaller, community-driven website since 1999 (DSLReports.com) and a writer here at Techdirt for several years, so it's perhaps more obvious to me that scrappier upstarts don't have the luxury of telling their entire community to piss off to Twitter if they want to leave public feedback.

Not too surprisingly, Spayd's idea was received poorly by some in the news media who believe public interaction with readership on site is either beneath them or wholly irrelevant in the social media era. MIT Technology Review Editor Jason Pontin was quick to declare that Spayd's comments reflected a "disastrous first outing" as the Times' new public editor, going further to suggest that anybody who gives a damn about public comments has the "wrong priorities":

A disastrous first outing. Show me an editor who cares about comments, and that's someone with the wrong priorities. https://t.co/3JrFw8L9HS
-- Jason Pontin (@jason_pontin) July 11, 2016

Slate was also quick to deride Spayd's outlandish treatise (which, again, is simply to give a damn about your on-site community) as the "phony populism" and "willfully naive" rhetoric of a bygone era:

After writing that the paper is trying to move in the direction of more comments, she adds that the speed at which it has done so has been hindered by "other newsroom priorities." I'm not sure what those other priorities are, but to spend your first column focusing on something like a comments section is another sign that Spayd's priorities are bizarre and even -- this will sting -- out of touch.

Yes, how gauche. As we all know by now, you don't build community by treating site visitors well, you build community by telling them all to fuck off to Facebook, where their infectious, intellectual detritus can be more easily ignored.

posted 14 days ago on techdirt
Today's the day for bogus DMCA takedowns by clueless lawyers trying to hide embarrassing information, I guess. Earlier today we had a story about a legal exec at Sony Pictures issuing a completely bogus DMCA takedown over his salary info being included in the Sony Hacks email dump. And now we turn to Carl David Ceder, a young criminal defense lawyer in Texas. If you recognize that name, it might be because a much more well-known and established criminal defense lawyer, Scott Greenfield, wrote a few blog posts about young Carl a few years ago, when he discovered that Carl had been beefing up the content on his professional website by simply plagiarizing the content of other, more established legal bloggers, and posting it as if it were his own thoughts. To put it mildly, Carl did not respond well to this and sent a few barely comprehensible rants blaming everyone but himself, and never actually apologizing for copying someone else's content wholesale.

Now, there are lots of ways to deal with this kind of thing. One could admit it was a mistake, but that doesn't seem to be in Carl David Ceder's nature. And, of course, around here, we're certainly willing to consider fair use arguments for copying material, though Carl presents none, and, indeed, it appears there's little fair use claim he could make for what he did. There's a pretty strong argument that he engaged in both plagiarism (claiming someone else's work as your own) and copyright infringement, and from his response, didn't appear to understand either issue, or why some people were concerned about it.

But, today, about a year and a half after Greenfield's original post, it appears that Carl David Ceder has discovered copyright law, but for all the wrong reasons. He sent not one, not two, but three DMCA takedowns for Greenfield's original post, the first of which gets some pretty basic stuff about copyright law wrong.

So, let's start with the basics. Carl thinks he's found a way to get back at Scott, but he's wrong, because it appears he doesn't understand copyright law at all (given his actions earlier in copying content and then lashing out at everyone, perhaps this is not a surprise). First off, the specific copyright claim is not to Scott's overall post, but rather to Scott's use of Carl's awful headshot in the post, as part of his mocking of Carl. Now, there are lots of things wrong with this. For one, using an official headshot in reporting on someone is not copyright infringement. It's fair use. This is actually an issue that's come up in court multiple times, and it's always been considered fair use. There was the case a few years back of a gripes site that used professional headshots and got sued for infringement. In that case, it was determined that the use of the headshots was fair use and that the lawsuit was clearly a SLAPP suit designed to silence the site. More recently, in a more political context, a judge ruled that using a political headshot on a blog post was also fair use. So, the claim of copyright infringement here is already pretty damn weak. Perhaps more importantly, as Carl David Ceder seems to directly admit in one of the DMCA takedown notices, he doesn't even hold the copyright in question. Instead, he got the photo taken at his local JC Penney photo studio (classy!), and they retain the copyright, but have granted him a limited license to use the photo.
From one of the DMCA notices:

A website that your company hosts (according to WHOIS information) is infringing on at least one copyright owned by my company. An photograph of myself, that has a valid copyright by Lifetouch Portrait Studios Inc ("Lifetouch") -- which I have expressed permission to use, as an authorized user, to reproduce, distribute, and display my photograph. This copyrighted material was copied onto your servers without permission. "Lifetouch" only gave authorized permission for me to use it -- they were the photographers when I took this professional headshot. Please find the original document indicated this has a valid copyright, and it is being used in violation of copyright laws, and is infringing on valid copyright laws that apply to the contents of what is in this post. It is noted on the copyright authorization form, that "Federal and State copyright laws provide that the author of a work is the owner of it. Copying a work WITHOUT the author's permission is a violation of the law. The only permission given is to the owner of the CD given with the images on it." The copyright authorization form expressly states, "Any other copying is a violation of the copyright law and may subject the violater to criminal and civil prosecution."

If you'd like, you can also see the full copyright authorization notice. It's a pretty typical authorization notice from these kinds of studios, but Ceder seems to miss the fact that while it gives him a license to reproduce or display the image, it does not necessarily give him the authorization to issue a legal threat over it, as he is not the copyright holder. Nor does it appear that he is officially representing the actual copyright holder. Instead, he just quotes some of the authorization, which he appears to totally misunderstand. In giving him a non-exclusive license, Lifetouch still retains the actual copyright, and thus is the only one who can issue such a takedown or take any legal action over the photograph (which it shouldn't do because it's clearly fair use anyway). And while it's unlikely that Lifetouch gave anyone else a license to use Ceder's image, he doesn't actually know that. Greenfield certainly didn't need a license (it's fair use), but Ceder simply assumes that because Lifetouch gave him a non-exclusive license, it didn't give one to anyone else. Yet he has no evidence of that at all.

Finally, while Ceder quotes the silly and misleading copyright language on the authorization form, that language was meant for him and not for others. That language has no actual impact on Greenfield's use, which again is clearly protected fair use. Besides, that copyright notice is pretty bogus. Even referencing state copyright laws makes no sense, because photographs are strictly covered by federal copyright law, not state copyright laws (which, other than the rare exception of pre-1972 sound recordings, basically don't even exist any more). And, again, using a headshot in a blog post with commentary about the person is well-established fair use, so the bogus claim that any copying is infringement is just wrong. But, really, it's especially silly and ridiculous that this is coming from a guy who pretty clearly did infringe someone else's copyright in copying their entire article, and he's now using his total misunderstanding of copyright to claim that any copying is infringement.
So, hopefully either Greenfield will file a counternotice or the legal team at CloudFlare will reject such a bogus takedown notice (fwiw, CloudFlare probably doesn't host the site anyway, and could only pass on the notice to the actual host). And Carl David Ceder remains on display as a lawyer who doesn't seem to get copyright law at all, and also has a habit of reacting badly to people calling him out for his own bad behavior. Trying to censor Greenfield's post calling him out is pretty ridiculous. Abusing the law by filing a bogus DMCA takedown, falsely representing himself as the copyright holder (or as the copyright holder's representative), is even more problematic.

Oh, and finally, I emailed Ceder using the email address he included in the DMCA takedown notice, which he said was there to email him if CloudFlare wanted "further information." I asked him a few questions about the notice, but the email immediately bounced back, saying that it was an "alias" that was not found on Office365. But... then Carl emailed me back anyway (suggesting that the email does work, but he also tried to set up some sort of alias that failed), claiming he had no idea what I was talking about and didn't even know what the DMCA was. This seems... difficult to believe. The DMCA notice appears to come from his email, and has his signature file as well. It links to a version of that JC Penney copyright authorization that was uploaded today to a Scribd account named "CarlDavid Ceder." I also called him (voicemail) and emailed again asking how, if this wasn't him, someone else got their hands on this copyright authorization and is now going around pretending to be him and filing questionable DMCA notices on his behalf. In response, he did not answer this question, but again insisted that he has no idea what I'm talking about.

I guess it's possible that someone is trying to make him look bad by filing a bogus DMCA notice, though that seems like an awfully weird con -- and it's still not clear how that person would have gotten access to the JC Penney document. The other alternative, I guess, is that Ceder hired one of those online reputation management companies, and they're doing this. But, even if that were the case, then why wouldn't that company include one of its own email addresses as the "further information" email in the DMCA takedown notice (unless that's what the broken email alias is supposed to be)? Either way, the most likely, Occam's Razor answer is that Ceder did send the takedown and didn't want to admit it to me, but I'm open to other possible explanations. Seeing as none has arrived as of yet, I believe the existing story stands.

posted 14 days ago on techdirt
What is it with South American historical figures suddenly thinking they can control everything to do with their family names? You'll hopefully recall the brief existence of a case of publicity rights violation brought against Activision by Manuel Noriega over the depiction of him in the gamemaker's Call of Duty series. That case was quickly tossed out by the court, because the First Amendment has just a tiny bit more weight when it comes to artistic expression than does any publicity right for public historical figures from other countries that might, maybe, kinda-sorta exist, possibly. We might have struggled at the time to find a complainant less likely than Noriega to win this sort of long-shot in the American court system, but we need struggle no longer. Roberto Escobar, brother and former accountant to drug kingpin Pablo Escobar, has sent a letter to Netflix demanding a billion dollars (not a joke) and the right to review all future episodes of the streaming company's hit show Narcos, to make sure that he and his family are portrayed accurately. The letter, first published by TMZ (which explains the massive TMZ watermark on it), is quite a read.

"In the first season of Narcos, there were mistakes, lies and discrepancies from the real story," the letter says. "To this date, I am one of the few survivors of the Medellin cartel, and I was Pablo's closest ally, managing his accounting and he is my brother for life. I think nobody else in the world is alive to determine the validity of the materials, but me."

Escobar adds that he is seeking $1 billion in compensation, and "if they decline my offer we have attorneys ready to proceed with necessary actions" over misappropriation of the Escobar name. "I don't think there will be any more Narcos if they do not talk to me," he says. "They are playing me without paying. I am not a monkey in a circus, I don't work for pennies."

Okay, so let's unpack this a little. For starters, Roberto Escobar isn't even in the television series. Like, at all. He's not even mentioned. Using a handy thing called creative license, the show portrays Pablo's accountant as someone completely different, not related to the family. Which means this is all about Roberto Escobar claiming exclusive rights over the portrayal of other Escobars, which is an interesting legal concept in that it has almost no grounding in any kind of reality. First, Escobar makes no claim to any actual official intellectual property rights over his name. None. Instead, he touts his knowledge of the inner workings of the drug operation as the reason why he should exert this control. This novel legal theory is wholly unlikely to find any purchase within the American legal system. And even if it did, as was the case with Noriega's lawsuit, the First Amendment trumps any kind of publicity rights that might exist, in particular when we're talking about historical figures such as pretty much every named real person in the Narcos series. Certainly Pablo Escobar qualifies, as would most of his notorious gang.

Instead, this is likely an attempt by Roberto to make enough noise to get Netflix to hire him on to have some involvement in the show. He's apparently sent the company letters in the past requesting this, prior to his request for the paltry sum typically reserved for Dr. Evil. Though I admit it would be comical to see him actually try this tactic in court.

posted 14 days ago on techdirt
As numerous posts on Techdirt attest, the authorities really don't like Tor, even though the Onion routing system was developed by the US Naval Research Laboratory, not some terrorist hacker group. The latest jurisdiction to misunderstand how Tor works is Poland, as this report on Motherboard explains:

Polish authorities have requested British law enforcement to interrogate the node operator because of a 2014 forum post supposedly insulting the ex-mayor of a small Polish town; apparently an illegal act in Poland.

Specifically:

A letter from the District Public Prosecutor's Office in Bialystok, Poland, to the UK Home Office points to Article 212, paragraph 2 of the Polish Penal Code, which says, in sum, that characterising someone else in such a way that might "degrade them in public opinion or expose them to the loss of confidence necessary to occupy a given position [...] is subject to a fine or the penalty of limitation of liberty."

The Tor exit node used by the person who allegedly wrote the problematic post is run by Thomas White, better known as TheCthulhu on Twitter, where his bio reads: "Technology and privacy activist. Hidden service dev. Turkey-certified terrorist. Radical giver of no shits." It will therefore come as no surprise that White is unsympathetic to the request by the District Public Prosecutor's Office in Bialystok. Even better, he has posted part of his statement in reply to that request, which is well worth reading. White points out that the Polish law in question seems to violate Article 19 of the Universal Declaration of Human Rights, further enshrined as Article 10 of the European Convention on Human Rights. He says that he accepts the ex-mayor in question may have found a statement about him to be humiliating or offending, but adds:

I have many times felt offended where his political party have made derogatory remarks concerning the LGBT community for example, or where his complaint is an attempt to trample upon the rights of others. The difference is that I seem to have the mental capacity to take the opinions of others on board and reason my views with them to make my points.

White concludes pretty much as you might hope and expect:

I can only reaffirm my position that I have no intention of assisting with the request from the Polish authorities

Of course, the great thing about Tor is that White couldn't help the Polish authorities even if he wanted to, since he was just operating the exit node, and knows nothing about the origin of the Tor traffic he facilitates. The sooner governments learn this basic fact, the sooner they can stop wasting time and resources trying to extract information from people who don't have it.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+
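As a postscript, the reason an exit operator has nothing to hand over follows directly from how onion routing works: the client wraps its request in one encryption layer per relay, and each relay can peel off exactly one layer. The sketch below is a toy illustration only -- real Tor uses telescoping circuits and far more machinery, and the relay names and message here are made up -- but it shows why the exit node ends up holding the request with no idea who originated it. It uses nested symmetric encryption from the Python cryptography package.

    from cryptography.fernet import Fernet  # pip install cryptography

    # One symmetric key per relay in a three-hop circuit.
    relay_keys = {name: Fernet(Fernet.generate_key())
                  for name in ("entry", "middle", "exit")}

    message = b"GET http://example.com/ -- note: no sender identity inside"

    # The client wraps the message innermost-first, so the exit node's
    # layer is applied first and the entry node's layer last.
    onion = message
    for name in ("exit", "middle", "entry"):
        onion = relay_keys[name].encrypt(onion)

    # Each relay peels exactly one layer. The entry node knows the
    # client's address but sees only ciphertext; the exit node finally
    # sees the request, but received it from the middle relay -- it
    # never learns who the client is.
    for name in ("entry", "middle", "exit"):
        onion = relay_keys[name].decrypt(onion)

    print(onion.decode())  # readable only after the last (exit) layer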

posted 14 days ago on techdirt
Test prep for certification tests can be expensive. The Essential CompTIA & Microsoft Windows Server Administrator Bundle helps to prepare you to take 6 essential IT certification exams for only $69. You will have access to prep materials for the CompTIA A+ 220-901 and 220-902, CompTIA Network+ N10-006, and Microsoft Exams 70-410, 70-411, and 70-412. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

posted 14 days ago on techdirt
Yeah, so the Sony Pictures hack is basically old news at this point. People have gone through it for all the juicy details and it's been out of the news for quite some time. So, apparently, one Sony "legal affairs" exec decided that perhaps he could engage in a little copyfraud to try to hide some info without anyone noticing. As TorrentFreak first noticed, however, Sony Pictures Legal Affairs VP Daniel Yankelevits wasn't particularly subtle in sending a DMCA notice to Google, asking it to delist the Wikileaks page with a search engine for all of the Sony Hack emails. The full DMCA notice is as stupid as it is faulty.

There are oh so many things wrong with this -- many of which you'd think a "legal affairs" VP at a giant entertainment company would know about before sending it. But, to be fair, Yankelevits appears to be more of a contracts / "dealmaker" legal exec, rather than an intellectual property expert. But, still... Yankelevits gets almost everything wrong with this bogus takedown. Let's count the ways:

- This is not a legitimate DMCA notice by any means. He does not specify what copyright is being infringed (because none is). "It's not right" is not a claim of infringement.
- His salary info ($320,000 possibly rising to $330,000, by the way) is not copyright-covered material.
- His clueless request asks for "https://wikileaks.org/sony/emails" to be removed. That's the front page for Wikileaks' archive of all the leaked Sony emails. That means the actual email wouldn't even have been removed from Google's index if Google had complied (which it did not).
- Clearly, Yankelevits does not hold the copyright on the email in question, which was not written by him.
- Yankelevits sent the bogus DMCA takedown on behalf of Sony Pictures, despite there clearly being a personal motive behind it. It makes you wonder if Sony Pictures lets any exec just file DMCA notices in its name.
- Yankelevits lists the actual email URL as the "original URL," which makes no sense. The "original URL" is supposed to be where the content was copied from.

So, here we have a Sony Pictures legal exec filing a DMCA notice so stupid that it fails to make a copyright claim, fails to list the infringing work, and instead points to the email he really wants taken down as the "original" work, and demands a different URL (which doesn't have the info he's trying to hide) get taken down -- and it's all because he doesn't want his salary posted, because "it's not right," which is, you know, not how copyright law works, at all. But it does give you some enlightenment into how a top lawyer at Sony Pictures actually recognizes that the DMCA is a tool for censorship, yes? Well, that and the caliber of the legal minds working at Sony Pictures in their "dealmaking" division.

Read More...
posted 14 days ago on techdirt
Last fall, our think tank, the Copia Institute, released a paper, The Carrot or the Stick?, which detailed how innovation in the form of convenient, appealing and reasonably priced legal content streaming services appeared to be the most powerful tool in reducing piracy. The report looked at a number of different data sources and situations across multiple countries. And what we found, over and over again, was pretty straightforward: ratcheting up enforcement or punishment did not work -- or, if it did work, it only worked exceptionally briefly. However, when good, convenient authorized services were introduced, piracy rates fell off a cliff. We saw this pattern repeated over and over again. And yet... instead of seeing policymakers and legacy content companies pursue strategies to encourage more innovation and more competition in authorized services, they continually focus on enforcement and punishment. This makes no sense at all. Take the situation in the UK, for example. Last week, the UK's Intellectual Property Office (IPO) came out with a report noting that piracy in the UK had dropped significantly in the wake of authorized streaming services like Spotify and Netflix entering the market. The full report is worth reading and pretty clearly suggests -- as our own report last year did -- that having good authorized services in place is the best way to reduce piracy. The IPO’s report, carried out by research group Kantar Media, suggested a strong link between the rise of such services and falling piracy. 80pc of music listeners now use exclusively legal means, up from 74pc a year ago. This is all great and consistent with what we found in basically every country we looked at. But that's why it's equally troubling that, rather than supporting the innovative ecosystem that is successfully diminishing piracy, the UK's IPO has moved forward with its ridiculous plan to jail pirates for 10 years. As we described in great detail a few months ago, the IPO's support of 10-year prison sentences for copyright infringement was not only based on no actual data -- the number was pulled out of thin air -- but directly contradicted numerous studies on the deterrence effect of longer prison sentences. I spoke to people at the IPO (many of whom are quite reasonable) after the recommendation came out, and they insisted that the 10-year prison sentence would only be used for "true criminals" and not just people sharing files online. They apparently also promised Open Rights Group that the specifics would be clarified in the final bill so as not to target ordinary people file sharing online -- but that's not what happened: Partly in an attempt to deal with headlines that this was “10 years for filesharing”, the IPO has rewritten the definition of criminal liability. They told us during meetings that the new definition would make it very clear that ordinary internet users - including filesharers - would not be targeted, and raising the penalty would also mean narrowing its application to real criminals. Unfortunately the final draft appears to be as bad or worse than the original, with a very low threshold of “having a reason to believe” that the right holder will be exposed to “a risk of loss”. So, what the hell is going on at the IPO over there? They have clear research showing that a massively effective way to reduce piracy is to get more good, convenient authorized services. And they have no research backing up the idea that increased prison sentences will reduce infringement.
And yet, which one have they doubled down on? This is why people have so little respect for copyright law and why we so often refer to it as "faith-based" policy making. The evidence clearly points in one direction, and the powers that be instead go in the other, against all the evidence, because some people "feel" that piracy must be punished to make it stop. Permalink | Comments | Email This Story

Read More...
posted 14 days ago on techdirt
A lot of the problem with law enforcement database access is the access itself. Give enough people a way to look up compromising information on nearly anyone and abuse is guaranteed. Human nature ensures this outcome. Sure, abuse could be curbed with actual, substantial punishments for abusing this access, but as we've seen time and time again, the threat of firings and jail time doesn't mean much if law enforcement officers are rarely, if ever, fired/jailed for abusing their access privileges. The larger problem with access is the lack of strong deterrents. Access is essential to law enforcement work, but far too often, this access is used for anything but law enforcement reasons. Big Brother Watch has released a report [PDF] detailing numerous abuses of law enforcement databases by UK police staff over the past several years. Between 2011-2015, there were more than 800 individual UK police personnel who raided official databases to amuse themselves, out of idle curiosity, or for personal financial gain; and over 800 incidents in which information was inappropriately leaked outside of the police channels. The incidents are reported in a new Big Brother Watch publication, which also reports that in most cases, no disciplinary action was taken against the responsible personnel, and only 3% resulted in criminal prosecution or conviction. The report is an altogether depressing read. It shows that UK police staff can often be no better than the people they're supposed to be protecting citizens from -- like malevolent hackers, serial harassers, and mob bosses. Safe in Police Hands? shows that between June 2011 and December 2015 there were at least 2,315 data breaches conducted by police staff. Over 800 members of staff accessed personal information without a policing purpose and information was inappropriately shared with third parties more than 800 times. Specific incidents show officers misusing their access to information for financial gain and passing sensitive information to members of organised crime groups. A majority of these "breaches" resulted in nothing at all happening to violators. 1283 (55%) cases resulted in no disciplinary or formal disciplinary action being taken. The breaches range from the stupid… An officer found the name of a victim amusing and attempted to take a photo of his driving licence to send to his friend via snapchat. The officer resigned during disciplinary action. ... to the disturbing. An officer has been suspended and is under investigation for abusing his position to form relationships with a number of females. It is suspected that he carried out police checks without a policing purpose. Even as law enforcement agencies demand access to more data and work with national agencies to obtain additional personally-identifying information, like biometric data, they continue to handle this sensitive data with extreme carelessness. Kent Police were fined £100,000 in March 2015 after leaving hundreds of evidence tapes and additional documents at the site of an old police station. The breach was only discovered after an officer visited the new owner of the premises and discovered them by accident. In a similar incident South Wales Police were fined £160,000 in May 2015 for losing a video recording which formed part of the evidence in a sexual abuse case. Due to a lack of training the loss went unreported for two years. The long list of breaches listed in the report covers everything from improper access to abuse of CCTV footage to hacking into private Facebook accounts.
In numerous cases, officers resigned while under investigation rather than face the consequences of their actions. This is why Big Brother Watch suggests UK police officials -- and the government agencies that oversee them -- need to start taking this far more seriously than they currently do. One recommendation is to prevent abusers from slipping away unscathed by leaving the force. Where a serious breach is uncovered the individual should be given a criminal record. At present people who carry out a serious data breach are not subject to a criminal record. They could resign or be dismissed by an organisation only to seek employment elsewhere and potentially commit a similar breach. In organisations which deal with highly sensitive data, knowing the background of an employee is critical. The organization also suggests the government should put a few more teeth in its enforcement by attaching jail time to serious breaches -- something current law only hints at, rather than requires. Big Brother Watch also recommends mandatory, immediate disclosure of breaches to the victims whose records were improperly accessed. It also recommends the Snooper's Charter proposal to add citizens' online activity to law enforcement databases be rejected, if only because agencies have shown they can't secure the data they already have access to. Giving agencies with a track record of abuse access to even more potentially sensitive data -- without instituting serious deterrents -- is only asking for more trouble. Permalink | Comments | Email This Story

Read More...
posted 14 days ago on techdirt
The DEA will no longer be able to waltz into Riverside County (CA) judge Helios Hernandez's chambers and walk out with signed wiretap warrants. I mean, they'll still be able to get Judge Hernandez to sign warrants. After all, no one does it better: Nearly all of that surveillance was authorized by a single state court judge in Riverside County, who last year signed off on almost five times as many wiretaps as any other judge in the United States. He's so efficient even the DEA can't quite wrap its mind around it. Hernandez approved 20 times as many wiretaps as his counterparts in San Bernardino County. DEA officials said they could not explain that difference. The DEA never let Rule 41's jurisdictional limitations bother it. Agents used wiretap warrants to track suspects all over the nation. The DEA also didn't let the DOJ's hesitancy to condone its actions/warrants get in the way of its drug warring. DOJ lawyers heavily hinted that if the DEA wanted to use questionable wiretap warrants, it had better not be dragging its raggedy affidavits into federal court. But drag those affidavits into federal court it did, forcing the DOJ to defend the very warrants it told the DEA to stop dropping off at its place. The DOJ's lawyers said the toxic, possibly illegal warrants were actually 100% legal, perfectly compliant with federal and state law -- even though they were missing the signature of the local District Attorney, as required by federal law. The DEA -- having had its bogus warrant assembly line exposed by USA Today's Brad Heath and Brett Kelman -- is finally moving towards curbing its wiretap abuse. The Drug Enforcement Administration has ordered its agents to seek input from federal prosecutors before tapping Americans’ phone calls or text messages, months after it came under fire for a vast and legally questionable eavesdropping program in the Los Angeles suburbs. The rules are a significant change for the drug agency, which had dramatically increased its use of wiretaps over the past decade by seeking authorization from state judges and prosecutors who were willing to approve the surveillance more quickly and with less scrutiny. In theory, this means DEA agents will have to have federal prosecutors sign off on affidavits/warrants before running them past whoever happens to be manning the desk at the local DA's office. This won't necessarily make them more compliant with federal law, as it has historically been truly rare to find the local DA actually in his office, but it does mean there will finally be some oversight in place. To date, the only "oversight" the DEA has had to endure is the occasional DOJ lawyer telling agents "no fucking way" (ACTUAL QUOTE) whenever they approached federal prosecutors with a drug bust. And, unless DEA brass is really serious about changing the agency's shady methods, it's likely nothing will change. Drug warriors, like water, seek their own level, running downhill along the path of least resistance. Wiretaps are considered so intrusive that federal law requires approval from a senior Justice Department official before agents can even ask a federal court for permission to conduct one. The law imposes no such restriction on state court wiretaps, even when sought by federal agents. Unless the DEA (or Congress) closes this loophole, nothing will change. There may be a temporary improvement, but it will be just that: temporary. The DEA has long been used to jumping zero hurdles on its way to intercepting communications.
There's no reason to believe it won't revert to form unless steps are taken to prevent it. In fact, the DEA's actions will probably have less effect on its wiretap abuse than the installation of a new district attorney. As of the end of February, DA Mike Hestrin had only approved 14 wiretap warrants -- a huge decrease from the 126 approved over the same two-month period last year. Permalink | Comments | Email This Story

Read More...
posted 15 days ago on techdirt
This week, our first place winner on the insightful side is an anonymous commenter who had some thoughts on the notion of allowing a court case to disappear entirely because the plaintiff regrets it: This is a hard case. You feel the plaintiff here is sympathetic (and I'll have to take your word on that), and in this particular instance you seem to have no appetite for dragging the plaintiff in question back into the open. But hard cases make bad law. We cannot expect every "disappeared" case to involve a sympathetic plaintiff. Imagine if the likes of Malibu or Prenda were allowed to disappear old cases from court history. And we cannot know, presently, if cases have been completely removed from public view. This cannot be allowed to stand. It sucks for this particular plaintiff, and I'm very sorry that the consequences of their actions may put them in more difficulty than we feel they deserve. (Karma can be a real bitch that way.) But we absolutely cannot allow courts to disappear cases from public view. It would threaten (or further destroy) our trust in an open and fair rule of law. Once you disappear one case, what stops you from doing so for other reasons? In second place, we've got rw with a response to the latest abuses by the TSA: Time to kill this agency. They do absolutely no good and cause immense problems. Defunding them would free up money to help those in need. For editor's choice on the insightful side, we start out with a response from Norahc to the FBI's easygoing stance on Hillary Clinton's emails: Being stupid or dishonest is a crime if you're unlucky enough to get caught up in a FBI fabricated terrorism plot, but evidently it's not if your stupidity and dishonesty actually risked national security. I guess that means if she's elected president there will be another private email server set up. Next, we head to the interesting pro-fair-use language in the Supreme Court's latest Kirtsaeng ruling, where one commenter suggested that we simply support diminishing protections for artists, and OldMugwump offered his perspective: I'll say it yet again. Artists must get rewarded for creating things people value. I don't know anyone who disagrees. But copyright is no longer a good way to do it. It used to be a good way - before copying became trivially easy. Now we need a new way. Personally I like automated patronage - electronic "tip jars" that ensure micropayments go straight to artists (not middlemen) each time a work is enjoyed. But I'm sure there are other ways as well. We have to stop defending the dead horse of copyright, and start moving on to something that will actually help artists. Over on the funny side, our first place winner is again an anonymous commenter, this time responding to Mike Huckabee's settlement with the band Survivor by serving up a good ol' song parody:

It's the eye of the liar
It's the shrill of the right
Sinking down when the issue ends up viral
And the last known survivor
Sues his prey in this fight
And he's watching us all with the eye of his lawyers

For second place, we head to our post about the arrest of a man who posted a picture of himself burning the American flag on the 4th of July, where Tim Geigner reminisced about the day George Washington punched King George in the face, and one commenter noted that they'd love to have a picture of the event.
Then, an anonymous commenter expanded further on the mythology: I think Abe Lincoln might have captured it on his iPhone, but the video is a little shaky because the bald eagle he was flying on wouldn't hold still. It was probably spooked by the double gatling guns that Jesus was using to hold off the redcoat reinforcements. For editor's choice on the funny side, we start out with yet another anonymous commenter, this time catching an error in one of our headlines: Comcast Continues To Claim It's 'Not Feasible' To Offer Its Programming To Third-Party Cable Boxes You misspelled "Tyrannically Profitable". That is 23 characters in two words not 8 characters in one word. And finally, we've got Baron von Robber with a slick quantum computing joke: Quantum computers will be terrible for tech support. "Have you turn it off and on again at the same time without looking at it?" That's all for this week, folks! Permalink | Comments | Email This Story

Read More...
posted 16 days ago on techdirt
Five Years Ago The fight over PROTECT IP was heating up this week in 2011, with law professors joining the ranks of those opposed to the bill while Hollywood ramped up its smear campaign against Senator Ron Wyden, and Senator Jerry Moran removed himself as a co-sponsor of the bill. Of course, this wasn't the only bad bill being considered — there was also the anti-streaming bill, which caught the attention of video game streamers and was met with a mass of YouTube video protests. Meanwhile, the entertainment industry was busy moving ahead of the law by signing the major US ISPs onto a "five strikes" plan for copyright infringement. Those who received strikes would have to pay to contest them, and it looked like the industry had backdoored in the disconnection powers it so desired. But the most memorable thing to happen this week in 2011 was, of course, the unveiling of the famous (and fascinating/contentious from a copyright perspective) monkey selfie. Ten Years Ago This week in 2006, the RIAA was busy suing sites around the globe, with the latest target being Allofmp3.com in the UK. We were skeptical of this approach, but the Associated Press certainly seemed to have bought the scare stories about global piracy in full. The RIAA was also failing on the home front, with university students seeing right through its terrible "free" music service. Hollywood was busy taking down the free promotion it got from its fans, and after a German magazine noted that you can technically pirate a movie by simply screencapping every frame, we wondered how long it would take for the MPAA to try to ban the Print Screen button. There was a big, memorable moment this week in 2006 too: Senator Ted Stevens offered his infamous "series of tubes" explanation for the nature of the internet. Fifteen Years Ago Last week, we noted that Amazon introduced a free shipping program for the first time. This week in 2001, Barnes & Noble followed suit on Monday, and managed to do so without raising any prices. But then, on Friday... Amazon ended its free shipping program, calling it an experiment. Such was the dance of the early online retailers. We saw the early rumblings of a legal response to the problems of cyber-bullying, and early takes on how to deal with (or possibly flat-out ban) the use of cellphones while driving. We even saw the earliest of baby steps down the long road to Uber with Ireland experimenting with the ability to get cabs by texting. And in a move that may not have seemed revolutionary at the time, but was actually a first step towards opening up lots of enlightening data, Google unveiled its "Zeitgeist" product for exploring the most popular searches and trends. Twenty-Six Years Ago Techdirt has been around for a long time, but the folks at the EFF still have a few years on us: it was on July 6th, 1990 that the EFF was founded by John Perry Barlow and Mitch Kapor after both faced inquiries by law enforcement agents who were clueless about technology. Happy birthday, EFF! Permalink | Comments | Email This Story

Read More...
posted 16 days ago on techdirt
Limited time offer: Support Techdirt and get a Nerd Harder t-shirt! Did you miss your chance to get one of Techdirt's Nerd Harder t-shirts? Well, you're in luck — since the campaign on Teespring ended, rebooted, and ended again, enough people have reserved a shirt to cause it to reboot once more! So now you've got another chance to get your hands on one: This latest batch is only available until the end of the weekend, so hurry up and claim one unless you want to wait for another reboot! Permalink | Comments | Email This Story

Read More...
posted 16 days ago on techdirt
I saw a lot of excitement and happiness a week or so ago around some reports that the EU's new General Data Protection Regulation (GDPR) might possibly include a "right to an explanation" for algorithmic decisions. It's not clear if this is absolutely true, but it's based on a reading of the agreed-upon text of the GDPR, which is scheduled to go into effect in two years. Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which "significantly affect" users. The law will also create a "right to explanation," whereby a user can ask for an explanation of an algorithmic decision that was made about them. Lots of people on Twitter seemed to be cheering this on. And, indeed, at first glance it sounds like a decent idea. As we've discussed recently, there has been a growing awareness of the power and faith placed in algorithms to make important decisions, and sometimes those algorithms are dangerously biased in ways that can have real consequences. Given that, it seems like a good idea to have a right to find out the details of why an algorithm decided the way it did. But it also could get rather tricky and problematic. One of the promises of machine learning and artificial intelligence these days is precisely that we no longer fully understand why algorithms decide things the way they do. While it applies to lots of different areas of AI and machine learning, you can see it in the way that AlphaGo beat Lee Sedol in Go earlier this year. It made decisions that seemed to make no sense at all, but worked out in the end. The more machine learning "learns," the less possible it is for people to directly understand why it's making those decisions. And while that may be scary to some, it's also how the technology advances. So, yes, there are lots of concerns about algorithmic decision making -- especially when it can have a huge impact on people's lives -- but a strict "right to an explanation" seems like it may actually create limits on machine learning and AI in Europe -- potentially hamstringing projects by requiring them to be limited to levels of human understanding. The full paper on this does more or less admit this possibility, but suggests that it's okay in the long run, because the transparency aspect will be more important. There is of course a tradeoff between the representational capacity of a model and its interpretability, ranging from linear models (which can only represent simple relationships but are easy to interpret) to nonparametric methods like support vector machines and Gaussian processes (which can represent a rich class of functions but are hard to interpret). Ensemble methods like random forests pose a particular challenge, as predictions result from an aggregation or averaging procedure. Neural networks, especially with the rise of deep learning, pose perhaps the biggest challenge—what hope is there of explaining the weights learned in a multilayer neural net with a complex architecture? In the end though, the authors think these challenges can be overcome. While the GDPR presents a number of problems for current applications in machine learning they are, we believe, good problems to have. The challenges described in this paper emphasize the importance of work that ensures that algorithms are not merely efficient, but transparent and fair.
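To make the tradeoff in that quoted passage concrete, here's a minimal sketch in Python using scikit-learn -- the data is synthetic and the feature names are hypothetical, so treat it as an illustration of the interpretability gap rather than anyone's actual decision system:

```python
# Compare an "explainable" linear model with a harder-to-explain ensemble.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # pretend: income, age, tenure
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "approved" label

# A linear model is easy to interpret: one weight per feature tells you
# how each input pushed the decision for any individual case.
linear = LogisticRegression().fit(X, y)
print("linear weights:", linear.coef_)

# A 100-tree random forest predicts by averaging many trees. Its feature
# importances are an aggregate summary of the whole model -- they don't
# explain why any one person, in particular, was refused.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("forest importances:", forest.feature_importances_)
```

The linear model's weights are something you could plausibly hand to a user as an "explanation"; the forest's aggregate importances are not, and a deep neural net offers even less.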
I do think greater transparency is good, but I worry about rules that might hold back useful innovations. Prescribing exactly how machine learning and AI need to work, this early in the process, may be a problem as well. I don't think there are necessarily easy answers here -- in fact, this is definitely a thorny problem -- so it will be interesting to see how this plays out in practice once the GDPR goes into effect. Permalink | Comments | Email This Story

Read More...
posted 16 days ago on techdirt
Well, the era of robocop has begun. As you've probably heard already, in order to get the sniper in Dallas who shot and killed a whole bunch of police, the Dallas police apparently sent in a bomb robot to detonate a bomb. Normally that robot is designed to save people from bombs, but in this case the police decided to use it to deliver a bomb and blow up the guy, Micah Xavier Johnson, accused of doing the shooting. The city recently got three Remotec robots for its bomb squad; each one apparently costs about $200k. In asking around, it appears that those who are familiar with bomb robots can't find any examples of police using them in this way in the past. Though, of course, people have certainly raised the theoretical question of using remote automated systems, whether robots or drones, to take down killers who are on the loose. The Dallas Police, who have a long history of engaging in community policing designed to de-escalate situations rather than encourage antagonism between police and the community, have been handling all of this with astounding restraint, frankly. Many other police departments would be lashing out, and yet the Dallas Police Dept, while obviously grieving, appears to be handling this tragic situation professionally. And it appears that they did everything they could in a reasonable manner. They first tried to negotiate with Johnson, but after that failed and they feared more lives would be lost, they went with the robot + bomb option. And, obviously, considering he had already shot many police officers, I don't think anyone would question the police justification if they had shot Johnson. But, still, at the very least, the whole situation raises a lot of questions about the legality of police using a bomb offensively to blow someone up. And it raises some serious questions about how other police departments might use this kind of technology in the future. The situation here appears to be one where people reasonably concluded that this was the most effective way to stop further bloodshed. And this is a police department with a strong track record of reasonable behavior. But what about other police departments where they don't have that kind of history? What are the protocols for sending in a robot or drone to kill someone? Are there any rules at all? Furthermore, it actually makes you wonder: why isn't there a focus on using robots to de-escalate these situations? What if, instead of buying military surplus bomb robots, there were robots being designed to disarm a shooter, or detain him in a manner that would make it easier for the police to capture him alive? Why should the focus of remote robotic devices be to kill him? This isn't faulting the Dallas Police Department for its actions last night. But, rather, if we're going to enter the age of robocop, shouldn't we be looking for ways to use such robotic devices in a manner that would help capture suspects alive, rather than dead? Permalink | Comments | Email This Story

Read More...
posted 16 days ago on techdirt
This has been rumored before, and perhaps isn't a huge surprise given WhatsApp's use of end-to-end encryption, but Facebook has launched a trial of end-to-end encryption in Facebook Messenger, under a program it's calling "Secret Conversations" (which also allows for expiring conversations). It’s encrypted messages, end-to-end, so that in theory no one—not a snoop on your local network, not an FBI agent with a warrant, not even Facebook itself—can intercept them. For now, the feature will be available only to a small percentage of users for testing; everyone with Facebook Messenger gets it later this summer or in early fall. What's good to see is that Facebook is directly admitting that offering end-to-end encryption is a necessary feature if you're in the messaging business today. “It’s table stakes in the industry now for messaging apps to offer this to people,” says Messenger product manager Tony Leach. “We wanted to make sure we’re doing what we can to make messaging private and secure.” This is a good sign. For years, tech companies more or less pooh-poohed requests for encryption, basically suggesting it was only tinfoil hat wearing paranoids who really wanted such things. But now they're definitely coming around (something you can almost certainly thank Ed Snowden for inspiring). And, not surprisingly, Facebook is using the Signal protocol, which is quickly becoming the de facto standard for end-to-end encrypted messaging. It's open source, well-known and well-tested, which doesn't mean it's perfect (nothing is!), but it's at least not going to have massively obvious encryption errors that pop up when people try to roll their own. Some security folks have been complaining, though, that Facebook decided to make this "opt-in" rather than default. This same complaint cropped up recently when Google announced that end-to-end encryption would be an "option" on its new Allo messaging app. Some security folks argue -- perhaps reasonably -- that being optional rather than default almost certainly means that it won't get enough usage, and some users may be fooled into thinking messages are encrypted when they are not. Facebook's Chief Security Officer, Alex Stamos (who knows his shit on these things) took to Twitter (not Facebook?) to explain why it's optional, and makes a fairly compelling set of arguments (which also suggest that there's a chance that end-to-end encryption will eventually move towards default). A big part of it is that the way end-to-end encryption works (mainly the need to store your key on your local device) makes it quite difficult to deploy on a system, like Facebook Messenger, that people use from a variety of interfaces. Moxie Marlinspike, the driving force behind Signal, has already pointed out that the Signal protocol does support multi-device use, so hopefully Facebook will figure it out eventually. But in the short term, it would definitely change the way people use Messenger, and it's at least somewhat understandable that Facebook would be moderately cautious in deploying a change like this that would end up removing some features, and potentially confusing/upsetting many users of the service. Over time, hopefully, end-to-end encryption can be simplified and rolled out further. As some cryptographers have noted, this is a good start for a company with hundreds of millions of users on an existing platform in moving them towards encryption.
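For those curious about the mechanics, here's a toy sketch in Python of the basic end-to-end idea, using the pyca/cryptography library (the names and message are invented, and the real Signal protocol layers prekeys, a double ratchet, and authentication on top of this bare-bones exchange):

```python
# Toy end-to-end encryption: only the endpoints hold keys, so the server
# relaying the message sees nothing but ciphertext.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(my_private: X25519PrivateKey, their_public) -> bytes:
    """Both sides derive the same 32-byte key via a Diffie-Hellman exchange."""
    shared = my_private.exchange(their_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"toy-secret-conversation").derive(shared)

# Each device generates its own key pair; the private half never leaves
# the device -- which is exactly why multi-device support is hard.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Only *public* keys ever cross the wire.
alice_key = derive_key(alice_private, bob_private.public_key())
bob_key = derive_key(bob_private, alice_private.public_key())

# Alice encrypts; Facebook's servers would only ever relay this blob.
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(alice_key).encrypt(nonce, b"meet at noon", None)

# Bob, having derived the identical key, is the only one who can read it.
print(ChaCha20Poly1305(bob_key).decrypt(nonce, ciphertext, None))
```

Because the private key lives on a single device, a second phone or a web browser can't just pick up the conversation -- it would need its own keys and its own session, which is the deployment headache Stamos is describing.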
A ground-up solution probably should have end-to-end encryption enabled by default, but for a massive platform making the shift, this is a good start and a good move to protect our privacy and security. Anyway, anyone have the countdown clock running on how long until someone from the FBI or Congress whines about Facebook doing this? Permalink | Comments | Email This Story

Read More...
posted 17 days ago on techdirt
Journalist Teri Buhl -- who gained a bit of Techdirt infamy by claiming her public tweets couldn't be republished (which led to wild claims of copyright infringement and defamation) -- is still dealing with some legal woes of her own, stemming from the posting of someone else's actually private information to Facebook. Teri Buhl, who was found guilty on misdemeanor charges of harassment and breach of peace, was sentenced to 30 days in jail, one-year probation and a strict order not to interact with the victims' family, in Norwalk Superior Court today. The New Canaan woman was accused of harassing her then-boyfriend's daughter by posting parts of the girl's private journals online in 2010. Buhl, 40, was acquitted of interfering with a police investigation. Buhl appealed this decision, but has been denied by the Connecticut Supreme Court. In upholding [PDF] her conviction, the Supreme Court reaches some (not all that great) conclusions about social media platforms. Buhl had posted these private journal entries to a publicly-accessible Facebook page under the pseudonym of "Tasha Moore." (According to prosecutors. This name isn't referenced anywhere in the court decision.) While posts to this page were accessible by anyone, the posting itself would normally only have been seen by "friends" of the account. There's a lot of convolution to be sorted through, based on some severe disparities between the complainant's version of the events (which itself is somewhat contradictory) and Buhl's version, which begins with an entirely unrelated account. (Her version is here. She makes no mention of the "Tasha Moore" alias and -- I wish I were making this up -- compares her misdemeanor jailing to "an episode of Making a Murderer.") What seems to be (relatively) clear is that the post originally reached only eight people, most likely because they were tagged. The teen whose journal pages were posted to Facebook originally could only reach it through a friend's Facebook account. Then she said she could reach it through her own. Either way, the Supreme Court's finding is that the pages were published publicly, even though the number of people who had access to the posted pages was, at first, extremely limited. Here's Venkat Balasubramani's take on this part of the ruling: The court never reconciles the court of appeals’ definition of public (generally available) with the fact that the material here was posted to a page that 8 people were invited to. And the court’s discussion of the Facebook privacy settings and the victim’s testimony were equivocal at best. The victim testified that she had to view the page in question “through her friend’s” Facebook account, but once there, she could “could see everything through [hers]”. Say What!? The existence of how-to materials instructing users on the use of Facebook’s privacy settings is a testament to the average consumer’s challenges in understanding them. Given the difficulty courts and litigants have had with Facebook’s privacy settings, it was surprising to see the court conclude that the accessibility of the page was a matter of sufficient common knowledge and experience that anyone could testify about it. At a minimum, the court could have said that the state failed to carry its burden on this element of the offense. The larger problem with the decision is that it relies on another badly-written law put into place to deal with harassment.
Apart from finding that the state sufficiently proved the defendant posted the material in question, the court also finds the remaining elements of the offense met. Specifically, the court says there was sufficient evidence that the defendant posted material about the putative victim with the intent to inconvenience, annoy, or alarm the victim. The court looks to the posts and the surrounding context to find that the posts could easily have "vexed" or "provoked" the victim. "Inconvenience" and "annoyance" are supremely low bars, ones that could be met by every single advertiser on the planet. So, it's unsurprising that a post sent to eight people -- in which the complainant was never specifically targeted -- could meet the stipulations of the statute. The court also finds the delivery method used to put the journal pages in the hands of the parent whose daughter wrote them was harassing in and of itself. In the present case, the trial court reasonably could have found that the circumstances surrounding the mailing, the contents of the mailing, and the defendant's behavior thereafter demonstrate beyond a reasonable doubt her intent to harass, annoy, or alarm P or M through the mailing. The defendant could have brought the diary entries to P, her boyfriend of more than two years, directly, but she instead, as she admitted, sent them anonymously. The anonymous nature of the mailing served to increase P's and M's anxieties because they did not know who had intruded into M's bedroom and copied her diary entries, how the mailer had obtained the entries, or who else might have access to them. P, in fact, testified that he felt "violated" that M's diary entries were in "someone else's hands." Buhl's lawyer does make a good point about the decision, in a statement given to Eric Goldman and Venkat Balasubramani. We must catch up with the technology when it comes to our consideration of analogs. We have to begin not by abandoning well-known and traditional definitions of legal and logical concepts but by attempting to marry them consistently with their functional equivalents in cyberspace. Today I am not so sure that my Facebook status or the idea of a Facebook invitation is so clearly analogous to real world instantiations of the same that my inferences upon them can be treated identically. For me, I still believe that if I specifically invite 6 people to my private yard party by written invitation noting the same, and an uninvited guest walks through an open gate to my yard, my private party doesn't morph into a public affair. This conclusion is an intuitive one for most; we need to ensure that reliance upon legal analysis of social media platform evidence comes with the same intuitive flair. So, there's that. On the other hand, the use of social media platforms -- if one isn't careful to lock down privacy settings -- is prone to allowing anyone to view posts never intended to be seen by them. That's the nature of the sharing beast. You might throw a private party in your backyard, but if your fence is too low, anyone can watch the proceedings and even interject their own comments on your choice of entertainment, food, etc. (to carry through with the metaphor). Buhl still maintains her innocence. She claims she never made the postings under the "Tasha Moore" name. She also claims the only reason she's being nailed for this is because she hasn't divulged the actual source of the journal pages and Facebook post.
Buhl claims she's protecting a source and that this lack of disclosure should be covered under the First Amendment. The court never addresses these arguments, however. It instead notes the lower court may not have addressed this issue sufficiently, but refuses to entertain the argument itself. There's also an apparent lack of intent to annoy or harass. The journal page posts were never directed at the teen who had written them. Her discovery of them was secondhand. That's not to say the posting of a teen's private journal is a good thing -- especially when the poster has chosen to divulge personal info (full name, school attended) rather than redact it before publication. So, while this site's experience with Teri Buhl is largely based on stupid legal threats, there's really no reason to celebrate this decision. There's some very flawed logic on display, and it's made worse by a law that makes it a criminal act to "annoy" someone. The evidence tying Buhl to the postings appears to be mostly circumstantial, and the court's finding makes some bad assumptions about how "public" should be defined in the age of social media. Permalink | Comments | Email This Story

Read More...