posted about 1 month ago on techdirt
People tend to think of the GDPR as regulation companies must comply with. But thanks to a decision by the Court of Justice of the European Union earlier this month, there's particular reason to believe that ordinary Internet users will need to worry about complying with it as well. In this decision the court found that the administrator of a fan page on Facebook is jointly responsible with Facebook for the processing of its visitors' data. And, as such, the administrator must comply with applicable data processing regulations – which necessarily include the GDPR. The fan page at issue in this case appears to be run by some sort of enterprise, "Wirtschaftsakademie." But fan pages aren't always run by companies: as the court acknowledges, they are often run by individuals or small groups of individuals. Yet there doesn't appear to be anything in the ruling that would exempt them from its holding. Indeed, the court recognizes that its decision would inherently apply to them: "Fan pages are user accounts that can be set up on Facebook by individuals or businesses. To do so, the author of the fan page, after registering with Facebook, can use the platform designed by Facebook to introduce himself to the users of that social network and to persons visiting the fan page, and to post any kind of communication in the media and opinion market." In other words, whoever sets up a fan page becomes a processor of the data for visitors to its page, and thus jointly responsible with Facebook for its handling. The problem is, compliance with data protection regulations like the GDPR is no simple matter. In fact, as this article suggests, the decision also potentially makes it even more complicated and expensive by expanding the jurisdiction of individual member states' data protection authorities (which was something that EU-wide regulation like the GDPR was actually supposed to minimize).
[Eduardo] Ustaran expressed concern in his 2017 post about the potential for local DPAs’ authority to issue decisions that affect companies located in other areas, in this case, Facebook, whose EU representative is in Ireland. He says that this goes against the letter of GDPR’s one-stop shop goal. But even without this change to the GDPR's enforcement operation, the burdens of compliance were already a matter of concern. As discussed previously, compliance with the GDPR is difficult and expensive for even well-resourced companies. It's not something that individual Internet users are going to be able to easily manage, and that's a problem, because who would want to set up a Facebook fan page if doing so opened them up to such a crippling compliance burden? Which leads to the essential problem here. Some cheer the GDPR because it puts user privacy front and center as a policy priority. In and of itself, there's nothing wrong with doing so – in fact, it's an idea whose time has come. But it doesn't matter how well-intentioned a law is if instead of merely regulating otherwise lawful activity it ends up suppressing it. And it's especially problematic when that activity is expressive. Even if chilling expression weren't the intent, if that's the effect, then there is something wrong with the regulation. Furthermore, while it's bad enough if regulation chills the expressive activity of those well-resourced companies better able to navigate complex and costly compliance requirements, it's even worse if it chills the lawful and even desirable expressive activity of ordinary individuals. One of the things an Internet platform like Facebook does, and does well, is encourage the casual expression of ordinary people. If you have things to say, these platforms make it easy to say them to other people without you needing to invest in corporate structure or technical infrastructure before doing so.
These are tools that help democratize expression, which is ordinarily something that places claiming to value the principles of free expression should want to support. In fact, the greater the antipathy against big companies, the more those places should want to ensure that independent voices can thrive. But instead we're seeing how all this regulation targeted at those big companies ends up attacking regular people trying to speak online. We've seen the same problem with SESTA/FOSTA too, where individual online speakers suddenly find themselves risking legal liability for how they interact with other speakers online. And now it's happening again in the GDPR context, where the very regulation ostensibly intended to protect people online now threatens to silence them.

posted about 1 month ago on techdirt
Content trigger warning: this article will discuss a bunch of nonsense being said in a major American newspaper about Google. I fully expect that the usual people will claim that I am writing this because I always support Google -- which would be an interesting point if true -- but of course it is not. I regularly criticize Google for a variety of sketchy practices. However, what this story is really about is why the Boston Globe would publish, without fact checking, a bunch of complete and utter nonsense. The Boston Globe recently put together an entire issue about "Big Tech" and what to do about it. I'd link to it, but for some reason when I click on it, the Boston Globe is now telling me it no longer exists -- which, maybe, suggests that the Boston Globe should do a little more "tech" work itself. However, a few folks sent in this fun interview with noted Google/Facebook hater Jonathan Taplin. Now, we've had our run-ins with Taplin in the past -- almost always to correct a whole bunch of factual errors that he makes in attacking internet companies. And, it appears that we need to do this again. Of course, you would think that the Boston Globe might have done this for us, seeing as they're a "newspaper" and all. Rather than just printing the words verbatim of someone who is going to say things that are both false and ridiculous, why not fact check your own damn interview? Instead, it appears that the Globe decided "let's find someone to say mean things about Google" and turned up Taplin... and then no one at the esteemed Globe decided "gee, maybe we should check to see if he actually knows what he's talking about or if he's full of shit." Instead, they just ran the interview, and people who read it without knowing that Taplin is laughably wrong won't find out about it unless they come here. But... let's dig in. What would smart regulation look like?
You start with fairly rigorous privacy regulations where you have the ability to opt out of data collection from Google. Then you look at something like a modification of the part of the Digital Millennium Copyright Act, which is what is known as safe harbor. Google and Facebook and Twitter operate under a very unique set of legal regimes that no other company gets to benefit from, which is that no one can sue them for doing anything wrong. Ability to opt out of data collection -- fair enough. To some extent that's already possible if you know what you're doing, but it would be good if Google/Facebook made that easier. Honestly, though, that's not going to have much of an impact. I still think the real solution to the dominance of Google/Facebook is to enable more competition that can provide better services that can help limit the power of those guys. But Taplin's suggestion really seems to be going in the other direction, seeking to lock in their power, while complaining about them. The "modification" of the DMCA, for example, would almost certainly lock in Google and Facebook and make it nearly impossible for competitors to step up. Also, the DMCA is not "known as safe harbor." The DMCA -- a law that was almost universally pushed by the record labels -- updated copyright law in a number of ways, including giving copyright holders the power to censor on the internet, without any due process or judicial review of whether or not infringement had taken place. There is a small part of it, within Section 512, that includes a very limited safe harbor, which says that while actual infringers are still liable for infringement, the internet platforms they use are not liable if they follow a bunch of rules, including removing the content expeditiously and kicking people off their platform for repeat infringement.
The idea that "Google and Facebook and Twitter operate under a very unique set of legal regimes that no other company gets to benefit from" is complete and utter nonsense, and the Boston Globe's Alex Kingsbury should have pushed back on it. The Copyright Office's database of DMCA registered agents includes nearly 9,000 companies (including ours!), because the DMCA's 512 safe harbors apply to any internet platform that registers. Google, Facebook and Twitter don't get special treatment. Furthermore, as a new report recently showed, taking away such safe harbors would do more to entrench the power of Google, Facebook and Twitter since all three companies can deal with such liability, while lots of smaller companies and upstarts cannot. It boggles the mind that the Boston Globe let Taplin say something so obviously false without challenging him. And, we haven't even gotten to the second half of that sentence, which is the bizarre and simply false claim that the DMCA's Section 512 means that "no one can sue them for doing anything wrong." Again, this is just factually incorrect, and a good journalist would challenge someone for making such a blatantly false claim. The DMCA's 512 does not, in any way, stop anyone from suing anyone "for doing anything wrong." That's ridiculous. The DMCA's 512 says that a copyright holder will be barred from suing a platform for copyright infringement if a user (not the platform) infringes on copyright and when notified of that alleged infringement, the platform expeditiously removes that content. In addition to that, thanks to various court rulings, the DMCA's safe harbors are limited in other ways, including that the platforms cannot encourage their use for infringement and they must have implemented repeat infringer policies. Nowhere in any of that does it say that platforms can't be sued for doing anything wrong. If the platform does something wrong, they absolutely can be sued.
It's simply a fantasy interpretation of the DMCA to pretend otherwise. Why didn't the Boston Globe point out these errors? I have no idea, but they let the interview and its nonsense continue. In other words, they have complete liability protection from being sued for any of the content that is on their services. That is totally unique. Obviously newspapers doesn’t get that protection. And of course also [tech giants] have other advantages over all other corporations; all of the labor that users put in is basically free. Most of us work an hour a day for Google or Facebook improving their services, and we don’t get anything for that other than just services. Again, they do not have "complete liability protection from being sued for any content that is on their services." Anything they post themselves, they are still liable for. Anything users post on their platform where the site doesn't comply with the rules of Section 512, the site can be liable for as well. All DMCA 512 is saying is that they can be liable for a small sliver of content if they fail to follow the rules set out in the law that was pushed for heavily by the recording industry. Next up, the claim that "obviously newspapers don't get that protection" is preposterous. Of course they do. A quick search of the Copyright Office database shows registrations by tons of newspaper companies, including the Chicago Tribune, the Daily News, USA Today, the Las Vegas Review-Journal, the LA Times, the Baltimore Sun, the Chicago Sun-Times, the Albany Times Union, the NY Times, the Times Herald, the Times Picayune, the Washington Times, the Post Standard, the Palm Beach Post, the Cincinnati Post, the Kentucky Post, the Seattle Post-Intelligencer, the NY Post, the St.
Louis Post-Dispatch, the Washington Post, Ann Arbor News, the Albany Business News, Reno News & Review, the Dayton Daily News, Springfield News Sun, the Des Moines Register, the Cincinnati Enquirer, the Branson News Leader, the Bergen News, the Pennysaver News, the News-Times, the New Canaan News, Orange County News, San Antonio News-Express, the National Law Journal, the Williamsburg Journal Tribune, the Wall Street Journal, the Jacksonville Journal-Courier, the Lafayette Journal-Courier, the Oregon Statesman Journal, the Daily Journal and on and on and on. Literally I just got tired of writing down names. There are a lot more. Notably missing? As far as I can tell, the Boston Globe has not registered a DMCA agent. Odd that. But, back to the point: yes, newspapers get the same damn protection. There is nothing special about Google, Facebook and Twitter. And by now Taplin must know this. So should the Boston Globe. Ah, but perhaps -- you'll argue -- he means that the paper versions don't get the same protection, while the internet sites do. And, you'd still be wrong. All the DMCA 512 says is that you don't get to put liability on a third party who had no say in the content posted. With your normal print newspaper that's not an issue because a newspaper is not a user-generated content thing. It has an editor who is choosing what's in there. That's not true of online websites. And that's why we need a safe harbor like the DMCA's, otherwise people stupidly blame a platform for actions of their users. And let's not forget -- because this is important -- anything a website does to directly encourage infringement would take away those safe harbors, a la the Grokster ruling in the Supreme Court, which said you lose those safe harbors if you're inducing infringement. In other words, basically every claim made by Taplin here is wrong. Why does the Boston Globe challenge none of them? What kind of interview is this? And we're just on the first question. Let's move on.
What would eliminating the “safe harbor” provision in the Digital Millennium Copyright Act mean? YouTube wouldn’t be able to post 44,000 ISIS videos and sell ads for them. Wait, what? Once again, there's so much wrong in just this one sentence that it's almost criminal that the Boston Globe's reporter doesn't say something. Let's start with this one first: changing copyright law to get rid of a safe harbor will stop YouTube from posting ISIS videos? What about copyright law has any impact on ISIS videos one way or the other? Absolutely nothing. Even assuming that ISIS is somehow violating someone's copyright in their videos (which seems unlikely?) what does that have to do with anything? Second, YouTube is not posting any ISIS videos. YouTube is not posting any videos. Users of YouTube are posting videos. That's the whole point of the safe harbors. That it's users doing the uploading and not the platform. And the point of the DMCA safe harbor is to clarify the common sense point that you don't blame the tool for the user's actions. You don't blame Ford because someone drove a Ford as a getaway car in a bank robbery. You don't blame AT&T when someone calls in a bomb threat. Third, YouTube has banned ISIS videos (and any terrorist propaganda videos) going back a decade. Literally back to 2008. That's when YouTube stopped allowing videos from terrorist organizations. How could Taplin not know this? How could the Boston Globe not know this? Over the years, YouTube has even built new algorithms designed to automatically spot "extremist" content and block it (how well that works is another question). Indeed, YouTube is so aggressive in taking down such videos that it's been known to also take down the videos of humanitarian groups documenting war crimes by terrorists. Finally, YouTube has long refused to put ads on anything deemed controversial content. Also, it won't put ads on videos of channels without lots and lots of followers.
So basically this one short sentence -- 14 words long -- has four major factual errors in it. Wow. And he's not done yet. Or they wouldn’t be able to put up any musician’s work, whether they wanted it on the service or not, without having to bear some consequences. That would really change things. Again, YouTube is not the one putting up works. Users of YouTube are. And if and when those people upload a video -- that is not covered by fair use or other user rights -- and it is infringing, then the copyright holder has every right under the DMCA that Taplin misstates earlier to force the video down. And if YouTube doesn't take it down, then they face all the consequences of being an infringer. So what would "really change" if we removed the DMCA's safe harbors? Well, YouTube has already negotiated licenses with basically every record label and publisher at this point. So, basically nothing would change on YouTube. But, you know, for every other platform, they'd be screwed. So, Taplin's plan to "break up" Google... is to lock the company in as the only platform. Great. And this leaves aside the fact (whether we like it or not) that YouTube's ContentID system, which allows copyright holders to "monetize" infringing works, has actually opened up a (somewhat strange) new revenue stream for artists, who are now actually profiting greatly from letting people use their works without going through the hassle of negotiating a full license. I also think it would change the whole fake news conversation completely, because, once Facebook or YouTube or Google had to take responsibility for what’s on their services, they would have to be a lot more careful to monitor what goes on there. Again... what? What in the "whole fake news conversation" has anything to do with copyright? This is just utter nonsense. Second, if platforms are suddenly "responsible" for what's on their service, then...
Taplin is saying that the very companies he hates, that he thinks are the ruination of culture and society, should be the final arbiters of what speech is okay online. Is that really what he wants? He wants Google and Facebook and YouTube -- three platforms he's spent years attacking -- determining if his own speech is fake news? Really? Because, let's face it, as much as I hate the term, this interview is quintessential fake news. Nearly every sentence Taplin says includes some false statement -- often multiple false statements. And the Boston Globe published it. Should the Boston Globe now be liable for Taplin's deranged understanding of the law? Should we be able to sue the Boston Globe because it published utter nonsense uttered by Jonathan Taplin? Because that's what he's arguing for. Oh, but, I forgot, according to Taplin, the Boston Globe -- as a newspaper -- has no such safe harbor, so it's already fair game. Sue away, people... Wouldn’t that approach subject these services to death by a thousand copyright-infringement lawsuits? It would depend on how it was put into practice. When someone tries to upload pornography to YouTube, an artificial intelligence agent sees a bare breast and shunts it into a separate queue. Then a human looks at it and says, “Well, is this National Geographic, or is this porn?” If it’s National Geographic it probably gets on the service, and if it’s porn it goes in the trash. So, it’s not like they’re not doing this already. It’s just they’ve chosen to filter porn off of Facebook and Google and YouTube but they haven’t chosen to filter ISIS, hate speech, copyrighted material, fake news, that kind of stuff. This is just a business decision on their part. They know every piece of content that’s being uploaded because they used the ID to decide who gets the advertising. So they could do all of this very easily. It’s just they don’t want to do it. First off, finally, the Boston Globe reporter pushes back slightly. 
Not by correcting any of the many, many false claims that Taplin has made so far, but in highlighting a broader point: that Taplin's solution is completely idiotic and unworkable, because we already see the abuse that the DMCA takedown process gets. But... Taplin goes into spin mode and suggests there's some magic way that this system wouldn't be abused for censorship (even though the existing system is). Then he explains his fantasy-land explanation of how YouTube moderation actually works. He's wrong. This is not how it works. Most content is never viewed by a human. But let's delve in deeper again. Taplin and some of his friends like to point to the automated filtering of porn. But porn is something that is much easier to teach a computer to spot. A naked breast is something you can teach a computer to spot pretty well. Fake news is not. Hate speech is not. Separately, notice that Taplin never ever mentions ContentID in this entire interview? Even though that does the very thing he seems to insist that YouTube refuses to do? ContentID does exactly what he claims this porn filter is doing. But he pretends it doesn't exist and hasn't existed for years. And the Boston Globe just lets it slide. Also, again, Taplin insists that YouTube and Facebook "haven't chosen to filter ISIS" even though both companies have done so for years. How does Taplin not know this? How does the Boston Globe reporter not know this? How does the Boston Globe think that its ignorant reporter should interview this ignorant person? Why did they then decide to publish any of this? Does the Boston Globe employ fact checkers at all? The mind boggles. Meanwhile, we really shouldn't let it slide that Taplin -- when asked specifically about copyright infringement -- seems to argue that if copyright law was changed, it would somehow magically lead Google to stop ISIS videos, hate speech and fake news among other things. None of those things has anything to do with copyright law. 
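The triage workflow described in that quoted answer -- an automated classifier scores each upload, confident hits are blocked, and only the ambiguous middle reaches a human reviewer -- can be sketched roughly as follows. To be clear, this is a purely illustrative sketch: the thresholds, the `classify()` stub, and the `violation_score` field are all invented for the example and have nothing to do with how YouTube's real pipeline works.

```python
# Illustrative sketch of classifier-plus-human-review triage.
# All names and thresholds here are hypothetical assumptions.

AUTO_REMOVE_THRESHOLD = 0.95   # confident enough to block outright
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain: route to a person

def classify(upload):
    """Stand-in for a trained classifier; returns P(policy violation)."""
    return upload.get("violation_score", 0.0)

def triage(upload):
    """Route an upload based on the classifier's confidence."""
    score = classify(upload)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "removed"
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"   # the "is this National Geographic?" step
    return "published"
```

The sketch also illustrates the point above about why this works for porn but not for "fake news": a visually obvious category yields confidently high or low scores, so few uploads land in the human-review band, while a contested, context-dependent category clusters in the uncertain middle and swamps the human queue.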
Shouldn't he know this? Shouldn't the Boston Globe? As for the second paragraph, it's also utter nonsense. YouTube "knows every piece of content that's being uploaded because they used the ID to decide who gets the advertising." What does that even mean? What is "the ID"? And, even in the cases where YouTube does decide to place ads on videos (again, which is greatly restricted, and is not for all content), the fact that Google's algorithms can try to insert relevant ads does not mean that Google "knows" what's in the content. It just means that an algorithm does some matching. And, sure, Taplin might point out that if they can do that, why can't they also do it for copyright and ISIS -- and the answer is that THEY DO. That's the whole fucking point. Again, why is the Boston Globe publishing utter nonsense? Is Google trying to forestall this kind of regulation? Ultimately YouTube is already moving towards being a service that pays content providers. They announced last month that they’re going to put up a YouTube music channel. And that will look much more like Spotify than it looks like YouTube. In other words, they will license content from providers, they will charge $10 a month for the service, and you will then get curated lists of music. From the point of view of the artists and the record company, it’ll be a lot better than the system that exists now — where essentially YouTube says to you, your content is going to be on YouTube whether you want it to or not, so check this box if you want us to give you a little bit of the advertising. YouTube has been paying content providers for years. I mean, it's been years since the company announced that in one year alone, it had paid musicians, labels and publishers over a billion dollars. And Taplin claims they're "moving" to such a model? Is he stuck in 2005? And, they already license content from providers.
The $10/month thing, again, is not new (it's been available for years), but that's a separate service, which is not the same as regular YouTube. And it has nothing to do with any of this. If the DMCA changed, then... that wouldn't have any impact at all on any of this. Still, let's recap the logic here: So YouTube offering a music service, which it set up to compete with Spotify and Apple Music, and which has nothing to do with the regular YouTube platform, will somehow "forestall" taking away the DMCA's safe harbors? How exactly does that work? I mean, wouldn't the logic work the other way? The whole interview is completely laughable. Taplin repeatedly makes claims that don't pass the laugh test for anyone with even the slightest knowledge of the space. And nowhere does the Boston Globe address the multiple outright factual errors. Sure, I can maybe (maybe?) understand not pushing back on Taplin in the moment of the interview. But why let this go to print without having someone (anyone?!?) with even the slightest understanding of the law or how YouTube actually works, check to see if Taplin's claims were based in reality? Is that really so hard? Apparently it is for the Boston Globe and its "deputy editor" Alex Kingsbury.

posted about 1 month ago on techdirt
The Ultimate Raspberry Pi 3B Starter Kit is perfect for anybody with an interest in STEM projects. You'll get a new Raspberry Pi 3, along with a Sensor Kit that has 37 sensor modules and instructions for 35 projects, allowing you to launch your Raspberry Pi journey. You also get 10+ hours of instruction on how to use it. Great for kids and adults alike, this kit will help you build games, robots, tools, and much more. Use the code RASPBERRYPI10 for an additional 10% off the sale price of $145.99. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

posted about 1 month ago on techdirt
Romance novelist Faleena Hopkins recently turned the rest of the genre against her by deciding -- with the USPTO's blessing -- she was the only person who could use the word "cocky" in a book title. Given the nature of romance novels, the striking of the word "cocky" left precious few terms capable of describing a certain blend of bravado and sexual prowess. The backlash was not only immediate, but thorough. Authors hit with cease-and-desist notices posted these to social media. One writer filed a petition with the USPTO to have the recently-acquired trademark invalidated. To top everything off, the Authors Guild of America joined forces with two of the authors Hopkins sued. What Hopkins likely felt would be an easy win in a trademark infringement case is turning into another cautionary tale about questionable IP and heavy-handed enforcement. As The Guardian reports, Hopkins has already been handed a loss in her lawsuit against author Tara Crescent and publicist Jennifer Watson. In the case, heard in a New York court on Friday, judge Alvin Hellerstein described romance readers as “sophisticated purchasers” unlikely to be confused between different authors’ books, found that cocky was a “weak trademark”, and denied Hopkins’s motion for a preliminary injunction and temporary restraining order to stop the publication of books with the word “cocky” in the title. The lawsuit isn't dead, but authors are still free to use the word "cocky" in book titles until everything's settled. The oral arguments [PDF] suggest the judge doesn't find Hopkins' trademark arguments persuasive. Instead, the judge points out that any restraining order would cause damage to the people Hopkins is suing, far outweighing anything Hopkins might suffer if other "cocky" products remain on the market. [I]t seems to me that defendant, who is on the market with her romance novels, if restrained, would also suffer damage and it would be irreparable. If a book is taken off the market, it can't be sold.
Books of this nature have to do with timeliness as well. So I can't say that there is any balance here. If there is, it is likely to tip in defendants' favor because a good portion of injury by the plaintiff would be compensable in damages and captured profits. So that factor is in favor of defendant. Whether an injunction is in the public interest, given the way these trademarks are used, I don't think there is much of a public interest in them. Here plaintiff can't demonstrate that its trademark merits protection, nor in my opinion that defendant's use of a similar mark is likely to cause consumer confusion. Those are the eight factors that we just talked about. Accordingly, the motion for a TRO and for a preliminary injunction is denied. And there's this, which has nothing to say about the merits of the case, but does provide a brief glimpse of the intersection of the court of public opinion and the US federal court system. THE COURT: You present in your papers about a dozen instances of prior use of "Cocky" in a title: Bite Me Cocky; A Little Bit Cocky; The Cocky Cowboy; Cocky Balls Boa, described as an erotic parody; Cocky Cowboys; Cocky SWATS; Cocky: A Stepbrother Romance; Cocky: A Cowboy Stepbrother Romance; and so on. MR. REUBER: Your Honor, if I may? THE COURT: No. You are out of the case. MR. REUBER: I understand, your Honor. But I penned the brief, and there is an error that my client alerted me to this morning in the brief. Specifically, it is first one you just read, Bite Me Cocky, published in 2012. He has learned that that title may have changed as a result of the Cockygate sort of disputes. It might have been originally published as Bite Me and not Bite Me Cocky. I just wanted to point that out. THE COURT: Originally Bite Me, then it became Bite Me Cocky? MR. REUBER: Yes, your Honor. That was our understanding. THE COURT: What is the explanation for the change? MR. REUBER: As a protest, effectively. That is our best guess. 
THE COURT: In response to the protest, he added the word "Cocky"? MR. REUBER: In response to Cockygate registrations, yes, we believe the author added the word "Cocky" as a protest. That is pure supposition on our part, your Honor. We have only been doing this for about 48 hours. The challenge of the trademark continues, as is noted at the USPTO website. If everything continues down this road, Faleena Hopkins won't have any trademarks to bully people with, much less a lawsuit victory to justify her bullying behavior.

posted about 1 month ago on techdirt
By now it's abundantly clear that the Trump FCC's repeal of net neutrality was based largely on fluff and nonsense. From easily disproved claims that net neutrality protections stifled broadband investment, to claims that the rules would embolden dictators in North Korea and Iran, truth was an early and frequent casualty of the FCC's blatant effort to pander to some of the least competitive, least-liked companies in America (oh hi Comcast, didn't see you standing there). In fact, throughout the repeal, the FCC's media relations office frequently just directed reporters to telecom lobbyists should they have any pesky questions. With the rules now passed and a court battle looming, FCC boss Ajit Pai has been making the rounds continuing his postmortem assault on stubborn facts. Like over at CNET, for example, where Ajit Pai informs readers in an editorial that he really adores a "free and open internet" despite having just killed rules supporting that very concept: "I support a free and open internet. The internet should be an open platform where you are free to go where you want, and say and do what you want, without having to ask anyone's permission. And under the Federal Communications Commission's Restoring Internet Freedom Order, which takes effect Monday, the internet will be just such an open platform. Our framework will protect consumers and promote better, faster internet access and more competition." 'Course if you've paid attention, you know the FCC's remaining oversight framework does nothing of the sort, and is effectively little more than flimsy, voluntary commitments and pinky swears by ISPs that they promise to play nice with competitors.
With limited competition, FCC regulatory oversight neutered, the FTC an ill-suited replacement, and ISPs threatening to sue states that try to stand up for consumers, there's not much left intact that can keep incumbent monopoly providers on their best behavior (barring the looming lawsuits and potential reversal of the rules). Over in an interview with NPR, Pai again doubles down on repeated falsehoods, including a new claim that the repeal somehow had broad public support: NPR: ...this is not a popular decision. Millions of people have written in opposition to it. Public opinion polling shows most Americans favor net neutrality, not your open internet rule. And I wonder why you're doing this then? If public opinion is against you, what are you doing? Pai: First of all, public opinion is not against us. If you look at some of the polls — NPR: No, it is, sir, come on. Pai: If you look at some of the polling, if you dig down and see how these polls were constructed, it was clearly designed to reach a particular result. But even beyond that — NPR: It's not just one, there are many surveys, sir. Pai: The FCC’s job is not to put a finger in the wind and decide which way the winds are blowing, it's to look at the facts and make a sober judgment based on what the law is. And that is exactly what we've done here. Moreover, the long-term interest is in building better, faster, cheaper internet access. That is what consumers say when I travel around the country, and I have spoken to consumers from Los Angeles to the reservation in South Dakota, places like Dahlonega, Georgia. That is what is on consumers’ minds. That is what this regulatory framework is going to deliver. First Pai tries to claim that the public supported his repeal, then when pressed tries to claim that the polls that were conducted were somehow flawed. Neither is true. 
In fact, one recent survey out of the University of Maryland found that 82% of Republicans and 90% of Democrats opposed the FCC's obnoxiously-named "restoring internet freedom" repeal. Pai then tries to sell the interviewer on the implication that consumers simply aren't smart or informed enough to realize that gutting oversight of indisputably terrible companies like Comcast will somehow be secretly good for them. Whether Pai's repeated lies result in anything vaguely resembling accountability remains to be seen. But based on the volume of time Pai spends touring flyover country, it's pretty clear he's harboring some significant post-FCC political aspirations. Those ambitions are likely to run face first into very real voters (especially of the Millennial variety) harboring some very real annoyance at his gutting of a healthy and open internet. Permalink | Comments | Email This Story

posted about 1 month ago on techdirt
We've pointed this out over and over again with regard to all of the various attempts to "regulate" the internet giants of Google and Facebook: nearly every proposal put forth to date creates a regulatory regime that Google and Facebook can totally handle. Sure, they might find it to be a nuisance, but it's well within the resources of both companies to handle whatever is thrown their way. However, most other companies are then totally fucked, because they simply cannot comply in any reasonable manner. And, yet, these proposals keep coming -- and people keep celebrating them in the false belief that they will somehow "contain" the two internet giants, when the reality is that it will lock them in as the de facto dominant internet players, making it nearly impossible for upstarts and competitors to enter the market. This seems particularly bizarre when we're talking about the EU's approach to copyright. As we've been discussing over the past few weeks, the EU Parliament's Legal Affairs Committee is about to vote on the EU Copyright Directive, which has some truly awful provisions in it -- including Article 11's link tax and Article 13's mandatory filters. The rhetoric around both of these tends to focus on just how unfair it is that Google and Facebook have so much power, and are making so much money while legacy companies (news publishers for Article 11 and recording companies for Article 13) aren't making as much as they used to. But, as more and more people are starting to point out, if the Copyright Directive moves forward as is, it will only serve to lock in those two companies as the controllers of the internet. So why is it that the European Parliament seems hellbent on handing the internet over to American internet companies? 
In the link above, Cory Doctorow tries to parse out what the hell they're thinking: These proposals will make starting new internet companies effectively impossible -- Google, Facebook, Twitter, Apple, and the other US giants will be able to negotiate favourable rates and build out the infrastructure to comply with these proposals, but no one else will. The EU's regional tech success stories -- say Seznam.cz, a successful Czech search competitor to Google -- don't have $60-100,000,000 lying around to build out their filters, and lack the leverage to extract favorable linking licenses from news sites. If Articles 11 and 13 pass, American companies will be in charge of Europe's conversations, deciding which photos and tweets and videos can be seen by the public, and who may speak. In a (possibly paywalled) article over at Wired looking at the Copyright Directive, Doctorow is also quoted explaining just how massively this system will be abused for censorship of EU citizens: "Because the directive does not provide penalties for abuse – and because rightsholders will not tolerate delays between claiming copyright over a work and suppressing its public display – it will be trivial to claim copyright over key works at key moments or use bots to claim copyrights on whole corpuses. The nature of automated systems, particularly if powerful rightsholders insist that they default to initially blocking potentially copyrighted material and then releasing it if a complaint is made, would make it easy for griefers to use copyright claims over, for example, relevant Wikipedia articles on the eve of a Greek debt-default referendum or, more generally, public domain content such as the entirety of Wikipedia or the complete works of Shakespeare. 
"Making these claims will be MUCH easier than sorting them out – bots can use cloud providers all over the world to file claims, while companies like Automattic (Wordpress) or Twitter, or even projects like Wikipedia, would have to marshal vast armies to sort through the claims and remove the bad ones – and if they get it wrong and remove a legit copyright claim, they face unbelievable copyright liability." As we noted yesterday in highlighting a new paper looking at what happened when similar laws were implemented, the increase in censorship is not an idle threat or crying wolf. It happens. Frequently. And, yet, we still have EU politicians and supporters of the Copyright Directive -- while they complain about Google and Facebook's power over the internet -- turning around and pushing for plans that will not only lock in both of those companies as the dominant internet companies, but also force upon them the sole power to censor the speech of EU citizens. And they're about to vote on this in just hours and don't seem to have the first clue about what a dumb idea all of this is. Permalink | Comments | Email This Story

posted about 1 month ago on techdirt
It seems incredible, but the TPP trade deal is still staggering on, zombie-like. Its official name is now the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), but even the Australian government just calls it TPP-11. The "11" refers to the fact that TPP originally involved 12 nations, but the US pulled out after Donald Trump's election. The Australian Senate Standing Committee on Foreign Affairs, Defence & Trade is currently conducting an inquiry into TPP-11 as a step towards ratification by Australia. However, in its submission to the committee (pdf), Open Source Industry Australia (OSIA) warns that provisions in TPP-11's Electronic Commerce Chapter "have the potential to destroy the Australian free & open source software (FOSS) sector altogether", and calls on the Australian government not to ratify the deal. The problem lies in Article 14.17 of the TPP-11 text (pdf): No Party shall require the transfer of, or access to, source code of software owned by a person of another Party, as a condition for the import, distribution, sale or use of such software, or of products containing such software, in its territory. In its submission to the committee, the OSIA writes: Article 14.17 of CPTPP prohibits requirements for transfer or access to the source code of computer software. Whilst it does contain some exceptions, those are very narrow and appear rather carelessly worded in places. The exception that has OSIA up in arms covers "the inclusion of terms and conditions related to the provision of source code in commercially negotiated contracts". If Australia ratifies CPTPP, much will turn on whether the Courts interpret the term "commercially negotiated contracts" as including FOSS licences all the time, some of the time or none of the time. 
If the Australian courts rule that open source licenses are not "commercially negotiated contracts", those licences will no longer be enforceable in Australia, and free software as we know it will probably no longer exist there. Even if the courts rule that free software licenses are indeed "commercially negotiated contracts", there is another problem, the OSIA says: The wording of Art. 14.17 makes it unclear whether authors could still seek injunctions to enforce compliance with licence terms requiring transfer of source code in cases where their copyright has been infringed. Without the ability to enforce compliance through the use of injunctions, open source licenses would once again be pointless. Although the OSIA is concerned about free software in Australia, the same logic would apply to any TPP-11 country. It would also impact other nations that joined the Pacific pact later, as the UK is considering (the UK government seems not to have heard of the gravity theory for trade). It would presumably apply to the US if it did indeed rejoin the pact, as has been mooted. In other words, the impact of this section on open source globally could be significant. It's worth remembering why this particular article is present in TPP. It grew out of concerns that nations like China and Russia were demanding access to source code as a pre-requisite of allowing Western software companies to operate in their countries. Article 14.17 was designed as a bulwark against such demands. It's unlikely that it was intended to destroy open source licensing too, although some spotted early on that this was a risk. And doubtless a few big software companies will be only too happy to see free software undermined in this way. Unfortunately, it's probably too much to hope that the Australian Senate Standing Committee on Foreign Affairs, Defence & Trade will care about or even understand this subtle software licensing issue. 
The fate of free software in Australia will therefore depend on whether TPP-11 comes into force, and if so, what judges think Article 14.17 means. Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+ Permalink | Comments | Email This Story

posted about 1 month ago on techdirt
Jason Smith, over at Indivigital has been doing quite a job of late in highlighting the hypocrisy of European lawmakers screaming at internet companies over their privacy practices, while doing little on their own websites of what they're demanding of the companies. He pointed out the EU Commission itself appeared to be violating the GDPR, leading it to claim that it was exempt. And now he's got a new story up, pointing out that the website of UK Parliament member, Damian Collins, who is the chair of the Digital, Culture, Media and Sport Committee... does not appear to have a privacy policy in place, even though he took the lead in quizzing Facebook about its own privacy practices and its lack of transparency on how it treats user data. Now, there are those of us who believe that privacy policies are a dumb idea that don't do anything to protect people's privacy -- but if you're going to be grandstanding about how Facebook is not transparent enough about how it handles user data, it seems like you should be a bit transparent yourself. Smith's article details how many other members of the Digital, Culture, Media and Sport Committee don't seem to be living up to their own standards. They may have been attacking social media sites... but were happy to include tracking widgets from those very same social media sites on their own sites. Julie4Sunderland.co.uk is maintained on behalf of Julie Elliott MP, a fellow member of the Digital, Culture, Media and Sport Committee. It serves third-party content from Facebook and upwards of 18 cookies on visitors’ computers. Likewise, websites of fellow members Jo Stevens, Simon Hart, Julian Knight, Ian Lucas, Rebecca Pow and Giles Watling are also collecting data on behalf of the social networking giant from their visitors. The websites of Julian Knight, Ian Lucas, Giles Watling and Rebecca Pow also collect data on visitors for Twitter. Meanwhile, Rebecca Pow’s website sets third-party cookies from YouTube.com. 
Damian Collins’s website features a cookie message; however, the link in the message takes the user to a contact page that contains a form that requests the user’s name and email address. The page on which the form resides contains a link that activates a modal window and encourages the user to sign up for Damian Collins’s email newsletter. Moreover, the Parliamentary page for the Digital, Culture, Media and Sport committee is also setting and serving third-party cookies and content from Twitter. Now, you can reasonably argue that the websites of politicians aren't the same as a social media giant used by like half of the entire world. And there is a point there. But it's also worth noting that it's amazing how accusatory politicians and others get towards social media sites when they don't seem to live up to the same standards on their own websites. Maybe Facebook should do better -- but the very actions of these UK Parliament members, at the very least, suggest that even they recognize what they're demanding of Facebook is more cosmetic "privacy theater" than anything serious. Permalink | Comments | Email This Story
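The tracking described above works because any third-party widget a page embeds lets that third party set and read its own cookies when the visitor's browser fetches the resource. A minimal sketch of how one might audit a page for such embeds, using only the Python standard library (the site name and page markup here are hypothetical, and a real audit would also need to follow scripts that inject further resources):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class EmbedScanner(HTMLParser):
    """Collect the hosts of embedded resources (scripts, iframes, images)."""
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "iframe", "img"):
            for name, value in attrs:
                if name == "src" and value:
                    host = urlparse(value).hostname
                    if host:
                        self.hosts.add(host)

def third_party_hosts(page_html, first_party):
    """Return embedded hosts that don't belong to the site itself."""
    scanner = EmbedScanner()
    scanner.feed(page_html)
    return {h for h in scanner.hosts
            if h != first_party and not h.endswith("." + first_party)}

# A toy page resembling an MP's site with social-media widgets embedded.
page = """
<html><body>
  <img src="https://example-mp.co.uk/banner.jpg">
  <iframe src="https://www.facebook.com/plugins/like.php"></iframe>
  <script src="https://platform.twitter.com/widgets.js"></script>
</body></html>
"""
# Each host listed can set its own cookies when the browser fetches it.
print(sorted(third_party_hosts(page, "example-mp.co.uk")))
# ['platform.twitter.com', 'www.facebook.com']
```

The point of the sketch is that the site operator never handles the visitor's data directly; simply embedding the widget is what hands the data over, which is exactly the behavior these committee members criticized Facebook for enabling.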

posted about 1 month ago on techdirt
Section 230 of the CDA gave us the internet we know today. It has allowed hundreds of tech companies and dozens of social media networks to flourish. To some people, however, Section 230 immunity is the internet's villain, not its hero. Recent legislation has created some damaging holes in this essential protection, but it's still robust enough to fend off most legal action in which plaintiffs choose to sue service providers rather than the end user who did/said whatever the plaintiff finds tortiously offensive. Similar to what has been argued in multiple piracy-related lawsuits, the plaintiff in this lawsuit filed against Snapchat alleged one of the company's photo filters encouraged users to break the law. This lawbreaking had particularly tragic consequences. Christal McGee allegedly drove recklessly (over 100 mph) to capture her accomplishment in Snapchat’s speed filter. McGee’s car hit Maynard’s car and caused permanent brain damage to someone in the car. This is where Snapchat comes in. It wasn't that the driver was just using the service when the accident occurred. It's that the driver was using a certain filter when she hit the other vehicle. The Maynards sued Snapchat, alleging “Snapchat knew that its users could ‘use its service in a manner that might distract them from obeying traffic or safety laws.’ Further, the Maynards allege that Snapchat’s Speed Filter ‘encourages’ dangerous speeding and that the Speed Filter ‘facilitated McGee’s excessive speeding[,]’ which resulted in the crash.” As Eric Goldman points out, the lawsuit doesn't allege McGee posted a photo using the Speed Filter. According to their arguments, the filter's mere existence is enough. The lower court rejected this argument, granting Snapchat's Section 230 motion to dismiss. The state appeals court, however, has more sympathy for the Maynards' argument. The plaintiffs aren't trying to hold Snapchat liable for any photo McGee was trying to create at the time of the crash. 
(Testimony from McGee's passenger says McGee was "trying to get the car to 100 m.p.h." and had the app open on her phone, which was aimed at her speedometer.) Instead, the Maynards want Snapchat to be legally liable simply for creating a filter that might encourage users to take photos of themselves speeding. The appeals court decides [PDF] that this isn't actually a Section 230 case since the Maynards aren't attempting to hold Snapchat accountable for user-generated content. Instead, it points out Section 230 does not immunize service providers from being held liable for software features they themselves create. Snapchat argued that if it's not a Section 230 case, it should still be dismissed because the Maynards' complaint fails for other reasons. The appeals court disagrees: Although Snapchat contends that this Court should affirm the trial court’s grant of its motion to dismiss under the right for any reason rationale, because the Maynards allegedly did not properly state negligence claims against Snapchat and that the court lacked personal jurisdiction over Snapchat, these issues were not decided by the trial court below. Back to the trial court it goes to hear arguments about the points Snapchat raised, but did not fully address in its Section 230-based motion to dismiss. Eric Goldman disagrees with the appeals court's assessment of the Section 230 issue. First, even if she hadn’t completed the publication, McGee allegedly was preparing the speed filter-motivated content for publication. If she had been generating the speed filter only for her personal bemusement, without any plan or ability to share the content with her audience, then I can see why the claim wouldn’t treat Snapchat as the publisher/speaker of her content. 
But here, McGee’s creation of the speed filter video only makes sense as a preparatory step towards sharing the video with third parties, and I would extend Section 230’s coverage to preparatory steps in addition to the actual publication of content. Second, as a practical matter, the complaint will probably fail on prima facie grounds–similar to how the promissory estoppel and failure-to-warn workarounds to Section 230 are not very significant because the plaintiffs usually can’t win those claims on the merits. Though the accident was a terrible tragedy, the odds are good that Snapchat’s role in the accident isn’t covered by the applicable torts. So now the case will consume more litigation cycles only to end up in the same place. One of Section 230’s strengths is moving such cases out of the court system early when they relate to publishing third party content. The second part may seem cold-hearted but there's not much to like about racking up legal fees just to lose on other issues rather than Section 230 immunity. While the plaintiffs may have a point that Snapchat's Speed Filter (which has since been removed) possibly encouraged lawless and dangerous actions, the app had no power to actually force users to drive recklessly while using the app. It's a bit disingenuous to place all the blame on the app. It was a very stupid addition by Snapchat. But the driver who caused the accident is at fault, not the filter Snapchat created. Permalink | Comments | Email This Story

posted about 1 month ago on techdirt
It's been a long tradition here on Techdirt to highlight examples of politicians and political parties that push for stricter, more draconian copyright laws while violating those same laws themselves. But the French Rassemblement National (National Rally) party is taking this to new levels -- whining about the enforcement of internet filters, just as it's about to vote in favor of making such filters mandatory. Leaving aside that Rassemblement National, which is the party headed by Marine Le Pen, is highly controversial, and was formerly known as Front National, it is still an extremely popular political party in France. And, boy, is it ever pissed off that YouTube took down its YouTube channel over automatically generated copyright strikes. Le Pen is particularly angry that YouTube's automatic filters were unable to recognize that they were just quoting other works: Marine Le Pen was quoted as saying, “This measure is completely false; we can easily assert a right of quotation [to illustrate why the material was well within the law to broadcast]”. Yes, but that's the nature of automated filters. They cannot tell what is "fair use" or what kinds of use are acceptable for commentary or criticism. They can just tell "was this work used?" and if so "take it down." Given all that, and the fact that Le Pen complained that this was "arbitrary, political and unilateral," you have to think that her party is against the EU Copyright Directive proposal, which includes Article 13, which would make such algorithmic filters mandatory. Except... no. Within the EU Parliament, Rassemblement National is in a coalition with a bunch of other anti-EU parties known as Europe of Nations and Freedoms or ENF. And how does ENF feel about Article 13? MEP Julia Reda has a handy dandy chart showing that ENF is very much in favor of Article 13 (and the Article 11 link tax). So... 
we have a major political party in the EU, whose own YouTube channel has been shut down thanks to automated copyright filters in the form of YouTube's ContentID. And that party is complaining that ContentID, which is the most expensive and the most sophisticated of all the copyright filters out there, was unable to recognize that they were legally "quoting" another work... and their response is to order every other internet platform to install their own filters. Really? Permalink | Comments | Email This Story
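The limitation at the heart of this story -- a filter can answer "was this work used?" but not "was this use a lawful quotation?" -- is easy to see in a toy version of such a system. This is a deliberately crude sketch of fingerprint matching, not ContentID's actual (proprietary) algorithm: it hashes overlapping word windows of a reference work, so a brief, plainly legal quotation trips exactly the same match as wholesale copying.

```python
import re

def fingerprints(text, n=5):
    """Hash every overlapping n-word window of a normalized text."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {hash(" ".join(words[i:i + n])) for i in range(len(words) - n + 1)}

def filter_flags(reference, upload, n=5):
    """True if any n-word window of the upload matches the reference work."""
    return bool(fingerprints(reference, n) & fingerprints(upload, n))

reference = ("we choose to go to the moon in this decade and do the other things "
             "not because they are easy but because they are hard")

full_copy = reference
quotation = 'as the speech put it, "not because they are easy but because they are hard"'
commentary = "a video criticizing the speech without reproducing any of its lines"

# The filter cannot tell copying from quotation: both match identically.
print(filter_flags(reference, full_copy))   # True
print(filter_flags(reference, quotation))   # True
print(filter_flags(reference, commentary))  # False
```

Judging whether a matched snippet is quotation, parody, or criticism requires context the matcher never sees, which is why "right of quotation" complaints like Le Pen's are an inevitable product of the very filters Article 13 would mandate.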

posted about 1 month ago on techdirt
Stanford's Daphne Keller is one of the world's foremost experts on intermediary liability protections and someone we've mentioned on the website many times in the past (and have had her on the podcast a few times as well). She's just published a fantastic paper presenting lessons from making internet platforms liable for the speech of their users. As she makes clear, she is not arguing that platforms should do no moderation at all. That's a silly idea that no one with any real understanding of these issues takes seriously. The concern is that as many people (including regulators) keep pushing to pin liability on internet companies for the activities of their users, it creates some pretty damaging side effects. Specifically, the paper details how it harms speech, makes us less safe, and harms the innovation economy. It's actually kind of hard to see what the benefit side is on this particular cost-benefit equation. As the paper notes, it's quite notable how the demands from people about what platforms should do keeps changing. People keep demanding that certain content gets removed, while others freak out that too much content is being removed. And sometimes it's the same people (they want the "bad" stuff -- i.e., stuff they don't like -- removed, but get really angry when the stuff they do like is removed). Perhaps even more importantly, the issues for why certain content may get taken down are the same issues that often involve long and complex court cases, with lots of nuance and detailed arguments going back and forth. And yet, many people seem to think that private companies are somehow equipped to credibly replicate that entire judicial process, without the time, knowledge or resources to do so: As a society, we are far from consensus about legal or social speech rules. There are still enough novel and disputed questions surrounding even long-standing legal doctrines, like copyright and defamation, to keep law firms in business. 
If democratic processes and court rulings leave us with such unclear guidance, we cannot reasonably expect private platforms to do much better. However they interpret the law, and whatever other ethical rules they set, the outcome will be wrong by many people’s standards. Keller then looked at a variety of examples involving intermediary liability to see what the evidence says would happen if we legally delegate private internet platforms into the role of speech police. It doesn't look good. Free speech will suffer greatly: The first cost of strict platform removal obligations is to internet users’ free expression rights. We should expect over-removal to be increasingly common under laws that ratchet up platforms’ incentives to err on the side of taking things down. Germany’s new NetzDG law, for example, threatens platforms with fines of up to €50 million for failure to remove “obviously” unlawful content within twenty-four hours’ notice. This has already led to embarrassing mistakes. Twitter suspended a German satirical magazine for mocking a politician, and Facebook took down a photo of a bikini top artfully draped over a double speed bump sign. We cannot know what other unnecessary deletions have passed unnoticed. From there, the paper explores the issue of security. Attempts to stifle terrorists' use of online services by pressuring platforms to remove terrorist content may seem like a good idea (assuming we agree that terrorism is bad), but the actual impact goes way beyond just having certain content removed. And the paper looks at what the real world impact of these programs have been in the realm of trying to "counter violent extremism." The second cost I will discuss is to security. Online content removal is only one of many tools experts have identified for fighting terrorism. 
Singular focus on the internet, and overreliance on content purges as tools against real-world violence, may miss out on or even undermine other interventions and policing efforts. The cost-benefit analysis behind CVE campaigns holds that we must accept certain downsides because the upside—preventing terrorist attacks—is so crucial. I will argue that the upsides of these campaigns are unclear at best, and their downsides are significant. Over-removal drives extremists into echo chambers in darker corners of the internet, chills important public conversations, and may silence moderate voices. It also builds mistrust and anger among entire communities. Platforms straining to go “faster and further” in taking down Islamist extremist content in particular will systematically and unfairly burden innocent internet users who happened to be speaking Arabic, discussing Middle Eastern politics, or talking about Islam. Such policies add fuel to existing frustrations with governments that enforce these policies, or platforms that appear to act as state proxies. Lawmakers engaged in serious calculations about ways to counter real-world violence—not just online speech—need to factor in these unintended consequences if they are to set wise policies. Finally, the paper looks at the impact on innovation and the economy and, again, notes that putting liability on platforms for user speech can have profound negative impacts. The third cost is to the economy. There is a reason why the technology-driven economic boom of recent decades happened in the United States. As publications with titles like “How Law Made Silicon Valley” point out, our platform liability laws had a lot to do with it. These laws also affect the economic health of ordinary businesses that find customers through internet platforms—which, in the age of Yelp, Grubhub, and eBay, could be almost any business. 
Small commercial operations are especially vulnerable when intermediary liability laws encourage over-removal, because unscrupulous rivals routinely misuse notice and takedown to target their competitors. The entire paper weighs in at a neat 44 pages and it's chock full of useful information and analysis on this very important question. It should be required reading for anyone who thinks that there are easy answers to the question of what to do about "bad" content online, and it highlights that we actually have a lot of data and evidence to answer the questions that many legislators seem to be regulating based on how they "think" the world would work, rather than how the world actually works. Current attitudes toward intermediary liability, particularly in Europe, verge on “regulate first, ask questions later.” I have suggested here that some of the most important questions that should inform policy in this area already have answers. We have twenty years of experience to tell us how intermediary liability laws affect, not just platforms themselves, but the general public that relies on them. We also have valuable analysis and sources of law from pre-internet sources, like the Supreme Court bookstore cases. The internet raises new issues in many areas—from competition to privacy to free expression—but none are as novel as we are sometimes told. Lawmakers and courts are not drafting on a blank slate for any of them. Demands for platforms to get rid of all content in a particular category, such as “extremism,” do not translate to meaningful policy making—unless the policy is a shotgun approach to online speech, taking down the good with the bad. To “go further and faster” in eliminating prohibited material, platforms can only adopt actual standards (more or less clear, and more or less speech-protective) about the content they will allow, and establish procedures (more or less fair to users, and more or less cumbersome for companies) for enforcing them. 
On internet speech platforms, just like anywhere else, only implementable things happen. To make sound policy, we must take account of what real-world implementation will look like. This includes being realistic about the capabilities of technical filters and about the motivations and likely choices of platforms that review user content under threat of liability. This is an important contribution to the discussion, and highly recommended. Go check it out. Permalink | Comments | Email This Story

posted about 1 month ago on techdirt
The Complete Adobe CC Training Bundle gives you access to 7 courses covering Adobe Creative Cloud products such as Photoshop, Premiere Pro, InDesign, Illustrator, and more. The hands-on lessons will train you in photo retouching, poster design, digital art, motion media, and more. With more than 60 hours and 200 tutorials, this $29 bundle will help you become an Adobe master. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team. Permalink | Comments | Email This Story

posted about 1 month ago on techdirt
FBI Deputy Director Andrew McCabe's career came to a sudden end earlier this year. Following in his predecessor James Comey's footsteps, McCabe swiftly found himself on the front sidewalk with a Sessions footprint on his ass. An Inspector General's report followed soon after, detailing many reasons McCabe might have been fired -- lying to investigators, leaking stuff to the press, evading concerns about his investigative neutrality in light of his wife's acceptance of donations from a Clinton-linked PAC... We don't know if any of these are why Trump fired McCabe, but pretty much any one of these things makes a firing justifiable. Lying to the FBI is serious business, even when it's just its oversight. Ask anyone who's been charged with nothing but lying when the FBI fails to build a better case. For McCabe, though, it was just a little "administrative misconduct." Something that could be addressed with a writeup or, in this case, a firing. That the trigger was pulled hours away from McCabe's retirement sucks for McCabe, but I find it very difficult to sympathize with career government employees who feel they're still owed a lifetime of retirement benefits after they've been fired for cause. McCabe is still trying to get what he thinks taxpayers owe him. He claims the firing was "politically motivated." Given the general nature of Trump's personnel decisions, he's probably not wrong. But the IG report shows him engaged in behavior that could result in termination. McCabe doesn't believe that's the case and he's demanding the DOJ hand over documents and manuals related to internal policies and firing practices. And he's doing this like an actual civilian: by filing FOIA requests. Unsurprisingly, that's not working. McCabe's lawyers are asking the DC court to force the DOJ to hand over all policies and manuals. As is argued in this quasi-FOIA lawsuit [PDF], the DOJ has been shirking its obligations to the public for decades. 
Defendants have been required for over 50 years to proactively disclose the kinds of documents at issue here, and there is no just reason for either their failure to do so now or for any further delay. Defendants’ breach of their disclosure obligations have prejudiced Mr. McCabe and Plaintiff in fundamental ways, all of which flow from one of FOIA’s core concerns: No citizen should “los[e] a controversy with an agency because of some obscure and hidden [administrative material] which the agency knows about but which has been unavailable to the citizen simply because he had no way in which to discover it.”

His FOIA request was only a few days old at the time of the filing, so this lawsuit isn't really about non-responsiveness. It's about the DOJ deliberately playing keep-away with documents McCabe needs to determine whether or not his firing was done in accordance with DOJ policy. This cannot possibly come as a surprise to McCabe. A career fed would know federal agencies don't turn over documents without a fight, even when their legal obligations are clear. The FBI is barely responsive to its own oversight, so there's no reason to believe the DOJ is going to proactively post documents for public consumption. And when it's facing a potential lawsuit over a firing, it's definitely going to amp up the stonewalling and denials. McCabe probably wouldn't have minded Joe Citizen being dicked around this way, but it irritates him when he's on the receiving end of treatment like this:

FOIA mandates that Defendants proactively disclose the applicable policies and procedures in an electronic format without waiting for an affirmative request. Defendants have failed to do so. When Plaintiff requested the pertinent documents, Defendants variously refused to comply and failed to properly, timely, or sufficiently respond. They even barred Plaintiff from accessing Defendants’ physical library, which contains some (or perhaps all) of the documents at issue here.
When you're forced out of government service, you suddenly become keenly aware of the injustices -- large and small -- perpetrated daily by federal agencies. For someone who used to be near the top of the fed food chain, this pettiness and opacity must be almost unbearable. When you're on the inside, it just looks like a measured response to stupid members of the public who won't mind their own business. But once you're on the outside looking in, you realize how much effort you must make just to force government agencies to comply with federal law and their own internal policies.

posted about 1 month ago on techdirt
So you might recall that part of the Telecommunications Act of 1996 was the concept of line sharing, or local loop unbundling. Simply put, the rules set forth by that law required incumbent telcos to share their networks with smaller competitors, providing wholesale access to bandwidth. It was an effort to foster something vaguely resembling competition in the broadband space by letting smaller companies piggyback on existing network infrastructure. The thought was that because the barriers to market entry were so high, this could help smaller competitors gain footholds that would otherwise be impossible.

Unsurprisingly, incumbent telcos utterly loathed this idea, and quickly got to work dismantling it. First by ensuring that the coordination between incumbent telcos (ILECs) and smaller competitors (CLECs) was as clunky, cumbersome and annoying as possible (something you probably noticed if you ever waited for installs from one of these smaller ISPs in the late 90s or early aughts), then by lobbying to have the rules dismantled. Incumbent telcos then used the resulting failure as evidence that the idea was doomed from the start, despite the fact that we never truly gave it a chance.

The idea of opening incumbent networks to competitors is pretty common in some parts of the world. France, for example, took the same concept and made it work quite successfully in cities like Paris, where to this day users can get TV, phone, and 100-500 Mbps broadband connections for a tiny fraction of what American consumers pay ($40 to $50 or so). A variation on this theme is open access, where multiple ISPs come in and compete over a core (sometimes government co-run) network; an idea that works well here and abroad, but also sees fierce incumbent ISP opposition for obvious reasons.

It's a battle incumbent telcos won handily thanks to lobbying power, but there remain a few lingering rules they're now trying to eliminate.
As such, they've been petitioning the Ajit Pai FCC to eliminate the remainder of these rules. In a blog post by telco lobbying organization US Telecom, telcos argue that the rules are no longer necessary, and (much like their attacks on net neutrality) argue that eliminating them will drive "innovation and investment":

"This month, USTelecom is petitioning the FCC for nationwide forbearance from rules created in 1996 that no longer make sense in today’s marketplace. Specifically, the petition focuses on unbundling obligations, which require some ILECs (incumbent local exchange carriers, a.k.a. local telephone companies) to sell access to parts of their networks to certain competitors at extremely low rates set by regulators. These outdated rules distort competition and investment decisions. When outdated and overly restrictive regulations are rolled back, innovation and investment thrives. And for over two decades, the broadband industry has transformed how the world communicates under a light-touch regulatory structure that spurred over one and a half trillion dollars in private investment."

Unsurprisingly, the smaller companies that are still using those lines don't see the rules in the same light. Independent California ISP Sonic, for example, is one of the last surviving independent ISPs from that era. Sonic argues that it still needs access to these networks as it slowly works to build out a fiber network of its own, and that gutting the remaining rules is simply an attempt to further constrain what passes for competition in American broadband markets:

"Sonic is fully engaged in the process of building fiber to customers in a number of markets around Northern California, but this represents a serious impediment to our ability to deploy fiber," says Sonic CEO Dane Jasper of the group's petition.
"The 1996 Telecommunications Act created competition in the telephone and broadband marketplace by requiring incumbents to unbundle essential last mile facilities, primarily copper wires that go to premises," Jasper says. "Serving customers on these facilities is an essential step toward fiber deployment, which Sonic is actively engaged in."

"However, we cannot lose access to copper in the meantime," Jasper said. "The cut-off of unbundled network elements as contemplated by US Telecom is an audacious attempt at limiting new fiber deployment by competitive carriers including Sonic, and it would directly harm hundreds of thousands of California consumers and businesses who we currently provide high-speed services using UNE copper facilities and backhaul."

The EFF also has a good blog post explaining why these rules are still important:

"While copper wire infrastructure may strike people as the infrastructure of yesterday, its existence and the legal rights to access it remain essential for competitive entry into the high-speed broadband market. This is because it is one of the only remaining ways a new company can gain customers to then leverage to finance fiber optic deployment. Should the FCC grant the petition, the growing monopolization of high-speed broadband above 25 Mbps where more than half of Americans have only one choice will likely become worse."

And while this isn't a story that's going to get much (any) real attention in the midst of so many other pressing issues, it's important all the same. Especially for a broadband market that's actually getting less competitive than ever thanks to these same telcos routinely refusing to upgrade their own networks, providing giant cable companies like Comcast a growing monopoly over an already uncompetitive broadband sector.

posted about 1 month ago on techdirt
What's up Europe? We've been talking a lot about insanity around the new copyright directive, but the EU already has some pretty messed up copyright/related rights laws on the books that are creating absurd situations. The following is one of them.

One area where US and EU laws differ is on the concept of the "database right." The US does not grant a separate copyright on a collection of facts. The EU does. Studies have shown how this is a horrible idea, and if you compare certain database-driven industries in the US and the EU, you discover how much damage database rights do to innovation, competition and the public. But, alas, they still exist. And they continue to be used in positively insane ways.

Enter Håkon Wium Lie. You might know him as basically the father of Cascading Style Sheets (CSS). Or the former CTO of the Opera browser. Or maybe even as the founder of the Pirate Party in Norway. Either way, he's been around a while in this space, and knows what he's talking about. Via Boing Boing we learn that: (1) Wium Lie has been sued for the completely absurd reason of (2) helping a site publish public domain court rulings that (3) are not even protected by a database right and (4) the judge ruled in favor of the plaintiff (5) in 24 hours (6) before Lie could respond and (7) ordered him to pay the legal fees of the other side. I've numbered these because I had to break out each absurd part separately just to start to try to comprehend just how ridiculous the whole thing is. And now, let's go through how each part is absurd in turn:

1. Wium Lie is being sued as an accomplice to the site rettspraksis.no by an operation called Lovdata. Wium Lie tells the entire history in his post, but way back in the early days of the web, while he was helping to create CSS, Wium Lie also helped put Norway's (public domain) laws online. At the time, that same company, Lovdata, was charging people $1-per-minute to access the laws. Really.
Eventually, Lovdata dropped the fees and is now the official free publisher of the laws in Norway. Of course, statutory law is just one part of "the law." Case law is also quite important and (thankfully) court orders (that make up the bulk of case law) are also in the public domain in Norway. However, Lovdata charges an absurd $1,500 per year to access those decisions. And, it claims a database right* on the collection it makes available online.

2. And yet, Wium Lie is still being sued. Why? When he saw that the website rettspraksis.no was trying to collect and publish these decisions, he borrowed Lovdata CD-ROMs from the National Library in Oslo. He borrowed the 2002 version of the CD-ROM. This date is important, because the EU's database rights last for... 15 years. 2002 databases (and, yes, Wium Lie points out that it's odd to call a stack of documents a database...) are no longer protected by the database rights.

3. So, yeah, the data is clearly in the public domain, and Wium Lie didn't violate anyone's copyright or database rights. Wium Lie notes that Lovdata didn't even try to contact him or rettspraksis.no before suing, but just told the court that they must be scraping the expensive online database:

I'm very surprised that Lovdata didn't contact us to ask us where we had copied the court decisions from. In the lawsuit, they speculate that we have siphoned their servers by using automated «crawlers». And, since their surveillance systems for detecting siphoning were not triggered, our crawlers must have been running for a very long time, in breach of the database directive. The correct answer is that we copied the court decisions from the old discs I found in the National Library. We would have told them this immediately if they had simply asked.

4. This is the most perplexing to me in all of this.
I can't read the Norwegian verdict (which, for Lovdata's lawyers, I did not get from scraping your site!), and don't know enough about Norwegian law, but this seems positively bizarre to me. It seems to go against fundamental concepts of basic due process; how could a judge come out with a verdict like this?

5. ?!?>#@!%#!%@!%!#%!!

6. Again: is this how due process works in Norway? In the US, of course, there are things like preliminary injunctions that might be granted pretty quickly, but even then -- especially when it comes to gagging speech -- there is supposed to be at least some element of due process. Here there appears to have been something close to none. Furthermore, in the US, this kind of thing would only be allowed if one side could show irreparable harm from leaving the site up. It is difficult to see how anyone could legitimately argue irreparable harm from publishing the country's own (public domain) court rulings.

I find it shocking that the judge ordered the take down of our website, rettspraksis.no, within 24 hours of the lawsuit being filed and WITHOUT HEARING ARGUMENTS FROM US. (Sorry for switching to CAPS, but this is really important.) We were ready and available to bring forth our arguments but were never given the chance. Furthermore, upon learning of the lawsuit, we, as a precaution, had voluntarily removed our site. If the judge had bothered to check he would have seen that what he was ordering was already done. There should be a much higher threshold for judges to close websites at just the request of some organization.

7. And, even if this was the equivalent of an injunction, to also tell Wium Lie and rettspraksis.no that they need to pay Lovdata's legal fees is just perplexing:

the two of us, the volunteers, were slapped with a $12,000 fee to cover the fees of Lovdata's own lawyer, Jon Wessel-Aas.
So, the judge actually ordered that we had to pay the lawyer from the opposite side, WITHOUT HAVING BEEN GIVEN A CHANCE TO ARGUE OUR CASE.

This whole situation is infuriating. Being sued is a horrible experience in the first place. But the details here pile absurd upon preposterous upon infuriating. The whole database rights concept is already a troublesome thing, but this application of it is positively monstrous. Wium Lie now has some good lawyers working for him, and hopefully this whole travesty will get overturned, but what a clusterfuck.

* A separate tangent that I'll just note here rather than cluttering up all of the above. I was a bit confused to read references to the EU's database directive/database rights, because Norway is not part of the EU. However, since it is a part of the European Economic Area (yes -- this can all get confusing), it has apparently agreed to enact legislation that complies with certain EU Directives, including the Copyright and Database Directives.
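For the curious, the expiry arithmetic behind point 2 can be sketched in a few lines of Python. This is purely an illustrative sketch, not legal advice: the start-of-term rule assumed here is the Database Directive's convention (Article 10) of running the 15-year term from January 1 of the year after the database was made available, and the function name is my own.

```python
# Illustrative sketch of the EU Database Directive's 15-year sui generis term.
# Assumption: the term runs from January 1 of the year following publication
# (per Article 10), so protection lapses at the end of that 15th year.

def database_right_expired(publication_year: int, check_year: int,
                           term_years: int = 15) -> bool:
    """Has the sui generis right on a database published in
    publication_year lapsed by check_year?"""
    first_full_year = publication_year + 1      # term starts Jan 1 of the next year
    expiry_year = first_full_year + term_years  # protection ends Dec 31 of expiry_year - 1
    return check_year >= expiry_year

# The 2002 Lovdata CD-ROM borrowed from the National Library:
print(database_right_expired(2002, 2018))  # → True: protection ran out at the end of 2017
```

Under that reading, the 2002 discs fell out of protection at the end of 2017, before rettspraksis.no published the rulings.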

posted about 1 month ago on techdirt
This week, our first place winner on the insightful side is Damien with a simple summary of the lack of logic behind link taxes: You know, I have yet to see a single source explain how a snippet tax is anything more than trying to charge people for talking about a news story and directing others who are interested to the original source. Apparently "on the internet" really does change everything. In second place, we've got a response from That Anonymous Coward to the FCC's aggressive demands for personal info from someone who made a pseudonymous FOIA request for information about Ajit Pai's Reese's Pieces mug: "In order to proceed with your request, please provide us with your name, your personal mailing address, and a phone number where you can be reached...." Aren't these the same assholes who had no problem with letters of support submitted in the name of the dead, people who submitted nothing, people who knew nothing about net neutrality, and people who opposed what the FCC was doing but someone used their names to give glowing copypasta support? Also can someone cite the part of the FOIA law that demands all of this information be turned over on demand?? Or is the FCC still making shit up as they go... For editor's choice on the insightful side, we've got ArkieGuy with a take on the link tax's cousin — the snippet tax — inspired by German publishers comparing quoting to stealing a pound of butter: The snippet tax is like wanting to charge cookbook publishers for recipes that call for butter. If you start charging people to recommend butter in their recipes, you won't sell as much butter. Next, we've got a comment from any moose cow word about the absurdity of the EU Copyright Directive's upload filtering requirements: There's no centralized database of copyright licensees, only copyright holders have access to those records. 
Yet, not even the largest copyright holders are able to verify which users were granted permission with the accuracy they demand be enshrined in law. How do they expect anyone else to do something only they have the capacity to do, and even they are incapable of doing?

Over on the funny side, our first place winner is Ehud Gavron responding to the FCC's FOIA resistance with a bit of a low-blow that is hard not to giggle at: Yes, Ajit Pai has a stupid mug. He also has a funny coffee cup :) E

In second place, we've got an anonymous response to a commenter who tried to portray our criticism of Google's recent patent attempts as further proof that we are Google shills: Mike: "Google should be shot in the head." You: "Look at Masnick wanting us to donate bullets to Google as if they don't have enough. Shill!" Also you: "Corporations are bad...unless they're intellectual property maximalists who have cheated actual artists and creators out of the fruits of their labors since the time of Queen Anne."

For editor's choice on the funny side, we start out with a comment from tanj about a South Carolina drug task force serving regular warrants like no-knock warrants: They did knock. Once, with a battering ram. They did announce their presence, quite loudly.

Finally, we've got a comment from Ninja confessing to an appropriate misreading of something in our post about the FCC's fake DDoS attack: "There's likely several more layers to this story" At first I read LAWYERS instead of layers. Which would be pretty accurate as well.

That's all for this week, folks!

posted about 1 month ago on techdirt
Five Years Ago This week, instead of going through the usual look at what was happening five, ten and fifteen years ago, we're going to put all the focus on the events of this week in 2013. Why? Because it's the week that the revelations of NSA spying, which dropped last week, truly hit the fan. There was a whole lot of news about it, almost completely dominating Techdirt, and it's worth a closer look. As the leaks kept coming, it was revealed that the source was Edward Snowden, who described his ability to wiretap anyone from his desk. As politicians scrambled to defend the program, the DOJ was trying to cover up the secret court ruling about it, and we realized the big scandal wasn't that the NSA did something illegal, but that it probably didn't. Some defenders of the PRISM program tried to claim it helped stop an NYC subway bombing, but the evidence was lacking and even the Associated Press soon called bullshit. James Clapper was simultaneously claiming that the leaks were a danger to us all, and also no big deal, while the author of the Patriot Act stepped up to say NSA surveillance must end, and that the law was supposed to prevent data mining. It started becoming clear that the metadata story was the biggest one. Some politicians began speaking out, with Senator Rand Paul calling for a class-action lawsuit against the NSA, and Senator Ron Wyden calling for congressional hearings, before a group of Senators got together to introduce a bill to end the secrecy of the FISA courts. One Senator had previously predicted a lot of this, but unfortunately he got voted out of office in 2010. Meanwhile, a former NSA boss said the leaks show America can't keep secrets, even though they really showed the opposite. The public was divided in its opinion on the program, depending heavily on how the question was asked. And we pointed out that the leaks show the importance of Wikileaks and similar operations. 
The backlash grew, with Derek Khanna calling for James Clapper to be impeached for lying, a team of 86 companies and other groups calling on Congress to end the spying, and the ACLU suing the government for 4th amendment violations. Various former NSA whistleblowers spoke up in defense of Snowden and against the agency's practices. Of course, there was also some pathetic backlash in the other direction, with Rep. Peter King calling for the prosecution of journalists who report on the leaks, and Congress moving to improve secrecy instead of fixing the problem. Then things began getting even worse, with the possibility emerging that the PRISM program enabled espionage against allies. A new leak at the end of the week revealed the NSA's talking points for defending itself, sales of George Orwell's 1984 began to skyrocket, and... well, let's just say there's plenty more on the way in the coming weeks.

posted about 1 month ago on techdirt
Crowdsourcing has obviously now been a thing for some time. Along internet timelines, in fact, crowdsourcing is now something close to a mature business practice and it's used for all manner of things, from the payment for goods created, to serving as a form of market research for new products and services, all the way up to and including getting fans involved in the creation and shaping of an end product. The video game industry was naturally an early adopter of this business model, given how well-suited the industry is to technological innovation. Here too we have seen a range of crowdsourcing efforts, from funding game creation through platforms like Kickstarter to empowering supporters to shape the development of the game. In that last example, it was Double Fine and Tim Schafer getting gamers involved in what would otherwise be the job of the creative team behind their game. The personalities here may matter greatly, because Ubisoft has recently unveiled an attempt to further get their fans involved in the game-creation process, yet many people are up in arms over it. Let's start with what Ubisoft is attempting with its anticipated next installment in the Beyond Good & Evil franchise. The long-awaited sequel to a 2003 Ubisoft game that was critically loved but flopped at retail, Beyond Good and Evil 2 will take place in an open universe full of strange creatures and cultures. During its E3 press conference, Ubisoft said that fans will be able to help populate that universe with their own music and artwork through a partnership with a company called HitRECord, with that company’s founder, actor-turned-entrepreneur Joseph Gordon-Levitt, appearing on stage. 
The HitRECord-powered Space Monkey Program allows fans to submit ideas and works into a series of musical and visual categories like “devotional music,” “anti-hybrid propaganda,” and “anti-establishment art.” Other fans can then comment on and remix those works, which will ultimately be evaluated by HitRECord and—if they fit the game well enough—sent along to Ubisoft. Everybody who’s contributed at all to an accepted work will be paid. If you're anything like me, your reaction to this was purely positive. Fans of Ubisoft titles and Beyond Good & Evil get to contribute to the game in a way they will recognize and be paid some amount of money for? How cool is that? Collaboration with fans on the creation of art is squarely in the realm of our CwF+RtB formula. To add some compensation to that makes this all the better. And, in my opinion, if this were anyone but Ubisoft doing this kind of thing, nobody would be pushing back on it at all. But because of Ubisoft's sketchy reputation, many are viewing this through purely cynical glasses and seeing nothing other than a company trying to avoid paying the full rate for the creation of its game. Almost immediately after Ubisoft’s conference, critics and developers started asking questions: Why not just pay full-time, salaried developers to do this work? What happens if fans’ work doesn’t get accepted? Do they not get paid? Did they do it all for nothing? Scott Benson, the co-creator of the indie game Night in the Woods and a vocal advocate for workers’ rights, pointed out that HitRECord’s business model seems to rely on what’s known as “spec work,” short for “speculation.” This is a common but nonetheless ethically muddy practice in creative and design fields. When you do work “on spec,” you’re producing something that a buyer might decide to pick up and then pay you for. Great, except this isn't being done in the "creative industry" at all, but rather directly with fans of the game franchise. 
Were Ubisoft trying to strong-arm artists for content it would otherwise pay for up front, then, yeah, this would suck. That's not what it's doing at all, though. Instead, the company is going directly to fans and asking them, rather than coercing them, to get involved in the project in a way those fans will find meaningful. Does this have the happy coincidence of being somewhat less costly? Sure. There's no denying that. But so what? If fans of a game are able to compete with the art created by the creative industry and want to do that type of thing under this platform, where exactly is the ethical dilemma? Were Benson to have his way, fans would be denied this opportunity because... why? Because someone else might not get paid? Where is the sense in that?

There's also something to be said for HitRECord's meta-crowdsourcing experiment here and how interesting it will be to see if it can be pulled off. “At HR, people build on each other’s ideas, and our website (and community) keeps track of how projects evolve—and how ideas influence one another,” HitRECord executive producer Jared Geller said in an email, noting that the company has paid out a total of nearly $3 million since it was founded in 2010. “So any contribution that is included in any of the songs or visuals (guitar parts, vocal stems, etc) delivered to the Beyond Good and Evil 2 dev team will get credited and paid. If your contribution isn’t used, you don’t get paid.”

So it's not just milking a fanbase for cheap labor, but allowing that fanbase to then play off of one another and build a community product, which will then be injected into the game and for which they will be paid. I mean, come on, if everyone could take their labor union hats off for just a second, they'd have to admit how cool an experiment this is.
And, while HitRECord will have the ultimate decision-making authority on how compensation is divvied up between creators, it even takes feedback from multiple creators into account when making those decisions.

The one area where there might be real concern is copyright infringement. There are other possible complications, as well, said a representative of NoSpec, an organization that advocates against the practice of spec work. “When people who participate in spec work know that the chance of payment is slim-to-none, it invites the fastest possible turnaround, and we’ve found that spec websites (those that sell design contest listings) are rife with plagiarism,” wrote the rep in an email.

There is truth to this, and Ubisoft and HitRECord had better have their shit in order if they don't want to turn this into some hellscape of accusations about plagiarism and copyright infringement. But if they can pull this off, the end result is going to be the injection of the voice of the fan directly into its game, which is about all we could hope for coming from a content producer.

I'll end this with a thought experiment. Imagine for a moment if I had written this same post, except I did a find/replace for "Ubisoft" and replaced it with "Sole game creator." Does anyone really think the same level of outrage would exist? If not, then this isn't a moral question at all, but a monetary one. And if that's the case, it should go without saying that Ubisoft's reputation shouldn't prevent it from being able to try something good and cool with its fans.

posted about 1 month ago on techdirt
Yet another Florida sheriff with a penchant for publicity is using his office (and manpower) to start some garbage viral War on Drugs. Hence, every bust made by his department -- utilizing armored vehicles and deputies that look like they shop at military surplus stores -- is splashed across the department's Facebook page. Fine, if that's what gets your blood flowing, but these scenes of busts, featuring the Sheriff front and center, contain claims that just aren't backed up by the actual paperwork. George Joseph of The Appeal has the details.

The video finds Sheriff [Darryl] Daniels, who announces to the viewer that criminals must leave his county or face the consequences. The camera follows him to the house, briefly focusing on a broken window before Daniels opens the door. Standing in the raided home, Daniels takes a large swig of his morning cup of coffee and declares, “Fifteen going to jail, three big gulps.” Despite the sheriff’s announcement, the “raid” resulted in only five adult arrests and one juvenile arrest, according to Elaine Brown, a lead records specialist at the sheriff’s office.

At best, maybe five will be going to jail. The sheriff depicts this as a raid on a "narcotics house" targeting opioids. The records obtained by The Appeal show no opioids were found during the raid. Four of the five adults were arrested for marijuana possession. The fifth was charged with MDMA and cocaine possession. But chances are those drugs might vanish along with the nonexistent opioids Sheriff Daniels proudly proclaims were taken out of circulation. Note the line about the field drug tests performed. These have already been proven bogus.

A sheriff's office spokesman informed The Appeal that the 1.2 grams of heroin and fentanyl seized during the raid turned out not to be opioids after being lab-tested. But the field tests told Sheriff Daniels everything he wanted to hear. The reliance on cheap, terrible drug field tests is part of Sheriff Daniels' drug-raiding tradition.
Arrests and seizures sound great when you're dragging a camera through someone's house for a Facebook video, but when nothing holds up in court, you're left with an empty charade using citizens as clickbait. A former deputy contacted by The Appeal points out that cheap drug tests are just another tool for abusive police work. “The really good ones cost money, but those take away your probable cause,” he said, referring to arrests and police searches for which error-prone drug test field kits can provide legal pretext. “It’s probably the cheapest ones they could get to do the minimum standards for an investigation.” This same former deputy also pointed out the marijuana charges were trumped up. According to reports, 35 grams of marijuana were seized during the raid, but somehow two people are being charged with possession of more than 20 grams. Cheap tests, cheap vicarious thrills, and a whole lot of hype over drug charges that will likely dissipate into minimal punishment (if anything) once the lab tests arrive. That's how America's drug warriors roll. Sheriff Daniels rolls a little harder than most, but that's because tough-on-crime sheriffs are newscaster favorites. As The Appeal points out, Daniels has leveraged these videos to appear on national news networks and say ridiculous things like he's planning to treat all drug overdoses as homicides. This report points out some very unpleasant things about our war on drugs. Law enforcement officials may claim to recognize drug addiction as a sickness, but they're still far more interested in rounding up users than dealers. Faulty field drug tests allow officials to exaggerate their successes (and misrepresent the amount of dangerous drugs in the community), when not allowing them to perform searches they otherwise wouldn't have probable cause to perform. They're part permission slip, part unpaid PR rep. 
And this constant failure of field drug tests to accurately identify drugs gets ignored because local media, for the most part, isn't willing to follow up on high-profile drug raids to correct the record. And it keeps working because many Americans love the image of "tough on drugs" officers kicking in doors and waving guns around. But, far too often, "tough" just means dumb, brutish, and unconstitutional.

posted about 1 month ago on techdirt
The New York State Senate just keeps pitching unconstitutional law-balls over the plate, apparently assuming legislators' good intentions will overwhelm judges asked to determine just how much the new laws violate the First Amendment. The senate recently passed an anti-cyberbullying bill -- its fifth attempt to push this across the governor's desk. The law couldn't be bothered to cite which definition of "cyberbullying" it was using, but once the definition was uncovered, it became apparent the bill has zero chance of surviving a Constitutional challenge should it become law. Eugene Volokh's post on the bill passed along several examples of criminalized speech the bill would result in, including one with its finger directly on social media's pulse. An under-18-year-old high school student becomes a nationally known activist, for instance for gun control or transgender rights or some such. People repeatedly mock his arguments online, and condemn him as an idiot, which a prosecutor thinks is "verbal abuse" and "would reasonably be expected to cause ... emotional harm" to him. The people can be prosecuted, and will be convicted if the jury agrees with the prosecutor. The law makes this a Class A misdemeanor, which can be redeemed for a full year in jail if the prosecutor can get a judge to agree on handing out the maximum sentence. That law protects only minors from a variety of protected speech because everyone knows cyberbullying ends once victims turn 18. The new law that's looking to steamroll protected speech addresses the other side of this generational gap. Eric Turkewitz was again the first person to spot the bad bill, pointing out it would criminalize the posting of photos of grandparents to social media if the photo's subjects suffer from any form of incapacitation and have not given explicit permission for their photos to be posted publicly. His post takes on the First Amendment ramifications of the NY Senate's latest oblique assault on free speech.
Elder Abuse Bill (S.409) that makes it a crime for caregivers (including family) to post photos on social media if elderly, vulnerable seniors aren’t able to give consent. [...] First off, while the First Amendment says that Congress “shall make no law…abridging the freedom of speech,” and the amendment applies to the states, there are still some very limited exceptions to it. But this just isn’t one of them. The First Amendment is no defense to conspiracy discussions about committing a crime, or defamation, or inciting imminent lawless action, or obscenity or copyright. I don’t see posting pictures of elderly Ma or Pa on that list. For this bill, if signed, to pass constitutional muster, the Supreme Court would have to create a wholly new category of restricted speech. Do you think they will do that? Or more importantly, did you even analyze that? My guess is no since this bill passed 61-0, and there are more than a few lawyers in the Senate. Here's what's being criminalized by this law: A PERSON IS GUILTY OF UNLAWFUL POSTING OF A VULNERABLE ELDERLY PERSON ON SOCIAL MEDIA WHEN, BEING A CAREGIVER WHILE PERFORMING THEIR DUTY OF CARE FOR A VULNERABLE ELDERLY PERSON, HE OR SHE POSTS AN IMAGE OR VIDEO OF SUCH PERSON ON SOCIAL MEDIA INCLUDING, BUT NOT LIMITED TO FACEBOOK, YOUTUBE, TWITTER, INSTAGRAM, SNAPCHAT, TUMBLR, FLICKR AND VINE, WITHOUT SUCH PERSON'S CONSENT. So, like the law says, if you act as a caretaker for an elderly person -- someone who might be your parent, grandparent, or close friend -- you can be charged with a misdemeanor for posting photos of them without their consent. "Vulnerable" in this bill simply means above the age of sixty and "suffering from a disease or infirmity" which prevents them from providing for their own health or personal care. That's a whole lot of gray area to cover with a vaguely-worded bill.
As Turkewitz points out in his post, this would criminalize a wide swath of social media sharing simply because someone in the photo did not explicitly consent to publication. He also notes it does not simply criminalize sharing photos of elderly people in incapacitated states. It criminalizes the publication of any photos taken at any point in time. [L]et’s say that on Veteran’s Day you share a photo of your disabled WW II father for whom you sometimes care. He’s 20 years old in that long-ago-taken pic and in uniform. You are proud of his service as part of the Greatest Generation. Guilty of a misdemeanor. The bill's supporters will almost certainly claim they never intended the law to be read that way. But the best way to prevent laws from being read this way is to craft them carefully, rather than just toss word salad on the senate floor and hope for the best. But it's all cool with the senators who voted (again!) for an unconstitutional bill that criminalizes protected speech, because one time this bad thing happened. Recent media reports have highlighted occurrences of a caretaker taking unauthorized photographs or video recordings of a vulnerable elderly person, sometimes in compromised positions. The photographs are then posted on social media networks, or sent through multimedia messages. There's no better way to craft a bad law than typing something up quick to criminalize a thing you saw on Facebook. Jesus Christ. This is almost too stupid to be true. [Sobs into tattered copy of US Constitution.] You cannot use the First Amendment as a doormat just because some people are assholes.

posted about 1 month ago on techdirt
The internet is many things to many people. Some of these things are good, while others are bad. Still, it should be fairly uncontroversial to say that the internet has generally done a good job of empowering ordinary people. With the advent of a platform sans gatekeepers, millions of people suddenly had a voice that they would not otherwise have been afforded. The result of this has been the explosion in blogs, podcasts, forums, and other outlets. The internet brings the ability to reach others and that has resulted in an explosion of thought and speech. It will come as no surprise that plenty of national governments throughout the world aren't huge fans of their people suddenly having this sort of voice and reach. After all, that kind of free expression can often come in the form of critiques of those very governments, and that kind of reach can create movements of dissent. You may recall back in April when Glyn Moody detailed Tanzania's attempt to tamp down this critical speech by forcing bloggers to register with the government at a cost greater than the average per capita income of its citizens. While this was a fairly naked attempt to keep the voices of its citizens from being heard, Glyn pointed out that the Tanzanian government was at least attempting to be cynically subtle about it. The current Tanzanian government is not very happy about this uncontrolled flow of information to the people. But instead of anything so crude as shutting down blogs directly, it has come up with a more subtle, but no less effective, approach. What a difference a few months make in the actions of an authoritarian regime. It seems this more subtle approach did not have the desired effect, as the Tanzanian government has now ordered that all unregistered bloggers simply shut themselves down or face criminal prosecution.
Tanzania ordered all unregistered bloggers and online forums on Monday to suspend their websites immediately or face criminal prosecution, as critics accuse the government of tightening control of internet content. Several sites, including popular online discussion platform Jamiiforums, said on Monday they had temporarily shut down after the state-run Tanzania Communications Regulatory Authority (TCRA) warned it would take legal action against all unlicensed websites. Digital activists say the law is part of a crackdown on dissent and free speech by the government of President John Magufuli, who was elected in 2015. Government officials argue the new rules are aimed at tackling hate speech and other online crimes, including cyberbullying and pornography. If this all sounds familiar to you, it should, because actions like these were very much the precursors to the Arab Spring. These types of attempts to control the internet, a platform that is well-designed to route around this type of control, rarely work for exactly that reason. People will generally find a way if they are motivated enough, which is what makes disappearing dissent as a government's first reaction so potentially disastrous. Critics of this move are predicting the demise of Tanzanian blogging. The Paris-based Reporters Without Borders group has said the new online content rules “will kill off Tanzania’s blogosphere”. Perhaps that's right. Or, perhaps, a move like this does more to spell the end of an authoritarian regime than the demise of a commonplace internet function that is ingrained into the human spirit.

posted about 1 month ago on techdirt
SESTA/FOSTA was pushed through with the fiction it would be used to target sex traffickers. This obviously was never its intent. It faced pushback from the DOJ and law enforcement agencies because pushing traffickers off mainstream sites would make it much more difficult to track them down. The law was really written for one reason: to take down Backpage and its owners, who had survived numerous similar attempts in the past. The DOJ managed to do this without SESTA, which was still waiting for presidential approval when the feds hit the site's principal executives with a 93-count indictment. The law is in force and all it's doing is hurting efforts to track down sex traffickers and harming sex workers whose protections were already minimal. Sex traffickers, however, don't appear to be bothered by the new law. But that's because the law wasn't written to target sex traffickers, as a top DOJ official made clear at a law enforcement conference on child exploitation. Acting Assistant Attorney General John P. Cronan's comments make it clear SESTA/FOSTA won't be used to dismantle criminal organizations and rescue victims of sex traffickers. It's there to give the government easy wins over websites while sex traffickers continue unmolested. In April, Backpage.com – the internet’s leading forum to advertise child prostitution – was seized and shut down, thanks to the collective action by CEOS and our federal and state partners. The Backpage website was a criminal haven where sex traffickers marketed their young victims. The Backpage takedown – and the contemporaneous arrests of individuals allegedly responsible for administering the site – struck a monumental blow against child sex traffickers. But other sites inevitably will seek to fill the void left by Backpage, and we must be vigilant in bringing those criminals to justice as well.
With the recent passage of the SESTA-FOSTA legislation, state and local prosecutors are now positioned to more effectively prosecute criminals that host online sex trafficking markets that victimize our children. "Criminals" that "host sex trafficking markets." That's the target. That's any website that might be used by actual sex traffickers to engage in actual sex trafficking. There's no dedicated web service for sex trafficking -- at least not out in the open where Section 230 immunity used to matter. This is all about taking down websites for hosting any content perceived as sex trafficking-related. It wasn't enough to hang Backpage and its execs. The government will be scanning sites for this content and then targeting the website for content posted by third parties it seems mostly uninterested in pursuing. Hosts of third-party content are usually easy to find. The actual third parties are far more difficult to track down. Intermediary liability is back. Section 230 is no longer an effective defense. The edges have been trimmed back and the government knows it can rack up easy wins over web hosts and slowly start destroying the web under the facade of saving sex trafficking victims. The DOJ knew this law would make it harder to track down traffickers. But it also knows the law allows it to target websites instead. And here it is touting the law it fought against to a conference full of law enforcement officials, letting them know targeting websites will give them wins and accolades and far fewer headaches than tracking down the individuals actually engaged in illegal activity.

posted about 1 month ago on techdirt
Everyone is looking for an answer, a solution, or a new approach to safeguard their organizations and their data. The Complete Microsoft 365 Security Training Bundle combines security training in Office 365, Windows 10, and Enterprise Mobility and Security (EMS), so you can learn how to provide enterprise-level services to organizations of all sizes. It's on sale for $49. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

posted about 1 month ago on techdirt
Large ISPs like Comcast, Charter, Verizon and AT&T this week uniformly proclaimed that the death of net neutrality is going to be a really wonderful thing for American consumers. Charter Spectrum, for example, took to the company's policy blog to insist that the neutering of historically popular consumer protections on this front will somehow result in everybody getting better broadband. The ISP's argument, as it has throughout this entire little dog and pony show, focused on the repeatedly debunked claim that the FCC's pretty modest net neutrality rules demolished telecom industry investment: "Without the regulatory overhang of these rules however, businesses like ours will have the certainty they need to make infrastructure investments over the long-term, helping more people get online and enabling even faster broadband. This includes bringing high speed broadband to more hard to serve areas, including rural communities." Which is something you might be inclined to actually believe if Charter's own executives weren't on record publicly stating that the rules had absolutely no meaningful impact on Charter's bottom line: "Title II, it didn't really hurt us; it hasn't hurt us," Charter CEO Tom Rutledge said at an investor event in December 2016, according to a report by advocacy group Free Press. Publicly traded companies like Charter are required to give investors accurate financial information, including a description of risk factors involved in investing in the company. In fact, dozens of industry CEOs have publicly admitted to investors and media outlets that the whole "net neutrality hurt broadband investment" narrative is completely baseless, something proven by anybody willing to spend five minutes with industry SEC filings and earnings reports. And while industry-hired economists tried to cherry pick very specific windows of investment to try and claim the exact opposite, the data here is pretty clear.
In fact, an analysis by consumer groups (pdf) found that Charter's overall CAPEX actually went up in the wake of the FCC's 2015 net neutrality rule creation: "Charter’s capital investments went up 15 percent after the FCC’s Open Internet vote (when we include the pre-merger investments made by Charter, Time Warner Cable and Bright House Networks). And not only are Charter’s investments up, they’re 12 percent higher than the estimates Charter gave to investors prior to closing that merger." This being post-truth America however, facts don't appear to carry quite the same weight as they used to, and it's abundantly clear that some of the least-liked companies in America are confident in their belief that repetition forges reality. Apparently, said companies hope that if they repeat this nonsense often enough, people won't notice the government sold them, and the health of the internet, down river without a second thought (and in some instances, that's pretty clearly working). Granted, Charter then gets to the real point of the company's blog post: pushing for a new net neutrality law the company knows that, in this political climate, it will be the one writing: "Charter’s commitment to our customers is our top priority. We urge Congress to pass new legislation that preserves an open internet and ensures a regulatory framework made for the 21st century, so we can continue to improve and invest in our networks and provide more people access to a fast, reliable, and open internet." We've noted how ISPs are worried about losing the looming court case over net neutrality, as well as the dozens of states that are now imposing state-level net neutrality protections. As such, the hope is that they can push forth a net neutrality law in name only; one with so many loopholes as to be effectively useless, but which will pre-empt any tougher state or federal rules (including the restoration of the FCC's 2015 rules).
It's a gambit that's not really working, in large part because these companies have obliterated any last vestiges of public trust they may have had with this latest lobbying assault.

posted about 1 month ago on techdirt
The damning report the President has been waiting for has arrived. The Inspector General's report covering everything from James Comey's handling of the Clinton email investigation (terribly, with bonus insubordination) to a couple of FBI agents forming a two-person #Resistance (stupid and made the FBI look bad, but not illegal) runs almost 600 pages and won't make anyone looking to pin blame solely on one side of the partisan divide very happy. It's been claimed the report would finally show the FBI to be an agency filled with partisan hacks, further solidifying "Deep State" conspiracy theories that the government Trump runs is out to destroy Trump. It was somehow going to accomplish this despite many people feeling the FBI's late October dive back into the Clinton email investigation handed the election to Trump. Whatever the case -- and whatever side of the political divide you cheer for -- the only entity that comes out of this looking terrible is the FBI. That the FBI would engage in questionable behavior shouldn't come as a surprise to anyone, but the anti-Trump "resistance" has taken Trump's attacks on the FBI as a reason to convert Comey, the FBI, and the DOJ into folk heroes of democracy. The summary of the report [PDF] runs 15 pages by itself and hands out enough damning bullet points to keep readers occupied for hours. Then there's the rest of the report, which provides the details and may take several days to fully parse. Here are some of the lowlights from Inspector General Michael Horowitz, possibly the only person who should be touting "Deep State" theories since he's spent his IG career being dicked around by the DEA, DOJ, and FBI. The report says everything about the Clinton email investigation was unusual. Termed the "Midyear Exam" by the FBI, the investigation was mostly a voluntary affair. Most of the evidence and testimony was obtained from consenting witnesses and participants.
The FBI rarely felt the need to compel testimony or evidence with subpoenas. It also did not access the contents of multiple devices used by Clinton's senior aides, devices that may have contained classified info that had been circulated through a private email server. As the report notes, this is at odds with Comey's sudden interest in Anthony Weiner's laptop, where his estranged wife (and former Clinton personal assistant) Huma Abedin apparently had stored copies of Clinton emails. The IG says the tactics used were unusual but does not pass official judgment on them. However, the actions of five FBI employees involved in the investigation did further damage to the FBI and its reputation by taking an investigation already viewed as politically-questionable and aggravating the perception. In undertaking our analysis, our task was made significantly more difficult because of text and instant messages exchanged on FBI devices and systems by five FBI employees involved in the Midyear investigation. These messages reflected political opinions in support of former Secretary Clinton and against her then political opponent, Donald Trump. Some of these text messages and instant messages mixed political commentary with discussions about the Midyear investigation, and raised concerns that political bias may have impacted investigative decisions. However, the IG did not uncover evidence suggesting any of these FBI employees had the power to steer the investigation. Some of those engaged in anti-Trump texts actually pushed for additional subpoenas and search warrants in an investigation that seemingly had little use for any testimony not obtained voluntarily. But that doesn't mean these actions were harmless. Nonetheless, these messages cast a cloud over the FBI’s handling of the Midyear investigation and the investigation’s credibility. From there, it moves on to James Comey's surprising decision to go public with the email investigation's conclusions in July of 2016. 
This followed the softening of language in the FBI's investigative report. Clinton's handling of classified info went from "grossly negligent" to "extremely careless." The possibility of hostile actors accessing Clinton's email server went from "reasonably likely" to "possible." Then Comey decided to go public, cutting plenty of people out of the loop so they wouldn't prevent him from doing so. Comey acknowledged that he made a conscious decision not to tell Department leadership about his plans to make a separate statement because he was concerned that they would instruct him not to do it. He also acknowledged that he made this decision when he first conceived of the idea to do the statement, even as he continued to engage the Department in discussions about the “endgame” for the investigation. Comey admitted that he concealed his intentions from the Department until the morning of his press conference on July 5, and instructed his staff to do the same, to make it impracticable for Department leadership to prevent him from delivering his statement. We found that it was extraordinary and insubordinate for Comey to do so, and we found none of his reasons to be a persuasive basis for deviating from well-established Department policies in a way intentionally designed to avoid supervision by Department leadership over his actions. [...] We concluded that Comey’s unilateral announcement was inconsistent with Department policy and violated long-standing Department practice and protocol by, among other things, criticizing Clinton’s uncharged conduct. We also found that Comey usurped the authority of the Attorney General, and inadequately and incompletely described the legal position of Department prosecutors. The late October letter to Congress about the reopening of the investigation isn't viewed as any better by the OIG. 
Comey claimed he needed to do this because withholding the discovery of emails on Anthony Weiner's laptop might have been viewed as swinging the election in Clinton's favor. The IG disagrees. Much like with his July 5 announcement, we found that in making this decision, Comey engaged in ad hoc decisionmaking based on his personal views even if it meant rejecting longstanding Department policy or practice. We found unpersuasive Comey’s explanation as to why transparency was more important than Department policy and practice with regard to the reactivated Midyear investigation while, by contrast, Department policy and practice were more important to follow with regard to the Clinton Foundation and Russia investigations. Comey’s description of his choice as being between “two doors,” one labeled “speak” and one labeled “conceal,” was a false dichotomy. The two doors were actually labeled “follow policy/practice” and “depart from policy/practice.” Although we acknowledge that Comey faced a difficult situation with unattractive choices, in proceeding as he did, we concluded that Comey made a serious error of judgment. Then comes the irony. As Comey became the public face of an investigation he shouldn't have been talking about, he routinely engaged in the same behavior he was currently investigating. We identified numerous instances in which Comey used a personal email account to conduct unclassified FBI business. We found that, given the absence of exigent circumstances and the frequency with which the use of personal email occurred, Comey’s use of a personal email account for unclassified FBI business to be inconsistent with Department policy. In addition to being a violation of FBI policy, James Comey -- currently idolized by some as a speaker of truth to power for being fired by the president -- also violated FOIA law by using a private email account for government communications.
Comey wasn't the only one -- other agents involved in the investigation routinely used private email accounts -- but he was the FBI's personification of the Clinton email investigation. On top of this, he told other FBI agents the use of personal email accounts would subject them to harsh punishment. In an October 2016 speech at an FBI conference in San Diego, Comey said, "I have gotten emails from some employees about this, who said if I did what Hillary Clinton did I'd be in huge trouble. My response is you bet your ass you'd be in huge trouble. If you used a personal email, Gmail or if you [had] the capabilities to set up your own email domain, if you used an unclassified personal email system to do our business... you would be in huge trouble in the FBI." Some may quibble about the lack of classified info being circulated by these agents and their Gmail accounts, but the fact remains the use of private email accounts increases the risk of circulation exponentially. Sticking to government accounts reduces this possibility to zero. There's much more in the report, including some discussion about the propriety of the Russian influence investigation that Trump claims is a witch hunt. Nothing in the report suggests the investigation isn't valid, even if the actions of agents (the anti-Trump texting) and Andrew McCabe's non-recusal (his wife took money from a Clinton-connected PAC) managed to cover everything with a slimy gloss of impropriety. The upshot of the report is this: James Comey deserved to be fired, although probably not for the reasons Trump had in mind when he did it. The people employed by the FBI are not always able to set aside their personal biases when engaged in investigations. But the FBI is no one party's political tool. It's a blend of both sides, which makes it unlikely anything was done intentionally to harm Trump or Clinton's political prospects. For all the complaining done by Trump, he's the one in office. 
If the election was "thrown" by Comey's fourth quarter audible in the email investigation, Trump was the beneficiary of the FBI's actions. This makes complaints about a Russian investigation "witch hunt" incoherent, as it tries to retcon the FBI's actions to portray them as being #NeverTrump even when they were (if unofficially) helping him. The simultaneous investigations of Clinton and Trump make it difficult to craft a coherent conspiracy theory, but it certainly isn't stopping anyone from trying. The FBI is untrustworthy, but it's not a kingmaker.
