posted about 6 hours ago on techdirt
Five Years Ago

This week in 2014, the world was dealing with the Heartbleed bug and turning its attention to the NSA's possible awareness of it — leading Obama to tell them to start revealing flaws, but with no particular incentive to actually do so. It wasn't clear if the NSA had definitely known about and used Heartbleed, but there was nothing stopping them, and people certainly weren't going to take their advice on dealing with it. Overall, the simple truth was that the government pays to undermine, not fix, internet security. Meanwhile, the Guardian and the Washington Post won Pulitzers for their coverage of the Snowden leaks, which made a lot of folks angry, including Rep. Peter King and CIA torture authorizer John Yoo.

Ten Years Ago

This week in 2009, the BSA was using the spate of stories about Somali pirates to talk about software piracy in a stunningly tone-deaf fashion, NBC was crafting its plans to make Olympic coverage worse and more expensive, the Associated Press was admitting its attack on aggregators looks stupid to the "untrained eye" while failing to explain why it shouldn't look stupid to everyone else too, and a hilarious but frightening warrant application got a college student's computer seized in part for using "a black screen with white font which he uses prompt commands on". DMCA abuse was chugging along as usual, with an activist group using it to hide exposure of its astroturfing and a news station using it to cover up video of it embarrassingly falling for an April Fool's story. And long before the Snowden revelations, not only were we already seeing revelations about the NSA's abuse of power, we were already unsurprised.

Fifteen Years Ago

This week in 2004, the internet was still beginning to embrace some of the innovations that define it today: location-based services were on the rise, with Google launching localized ads and mobile phone navigation systems threatening to oust expensive dedicated hardware (something also happening in other areas, like event ticket handling), and more and more people were going online wirelessly in one way or another. Of course, along with this came the rise of some more problematic trends too, like patent hoarding houses and DRM. In California, the first two arrests were made under a new law banning all kinds of video cameras in movie theaters, while one state senator was seeking to completely ban Gmail (which was still new) for some reason — though at least the legislature shot down another ban on violent video game sales to minors.

posted about 24 hours ago on techdirt
This year seems to be the year in which governments all over the globe really, really want to regulate the internet. And they're doing a ridiculously dumb job of it. We've talked a lot about the EU, with the Copyright Directive and now the Terrorist Content Regulation. And then there's Australia with its anti-encryption law and its "abhorrent content" law. India has already passed a few bad laws regarding the internet and is discussing a few more. Then there's the UK, Germany, South Korea, Singapore, Thailand, Cameroon, etc. etc. etc. You get the idea. Oh, and certainly, the US is considering some really bad ideas as well.

When you look at what "problem" all of these laws are trying to solve, it can basically be boiled down to "people do bad things on the internet, and we need to regulate the internet because of it." This is problematic to me for a variety of reasons, in part because it seems to be regulating the wrong party. We should, ideally, be going after the people doing the bad things, rather than the tools and services they are using to do the bad things (or to merely promote the bad things they're doing).

However, there is an argument -- not one that I wholly buy into -- that one reasonable way to regulate is to focus less on which party is actually doing the bad thing, and more on which party is best positioned to minimize the harm of the bad thing. And it's that theory of regulation (applied stupidly) that is behind much of the regulatory theory on the internet these days. Well, there's also a second theory behind many of the regulatory approaches, and it's "Google and Facebook are big and bad, so anything that punishes them is good regulation". This makes even less sense to me than the other approach, but it is certainly driving a lot of the thinking, at least in the EU (and possibly the US).

Combine those two driving theories for regulating the internet and you've got a pretty big mess. Regulators seem to be taking a sledgehammer to huge parts of the internet, rather than looking for narrow, targeted approaches. And, on top of that, in focusing so much on Google and Facebook, so many of these laws are written solely with those two platforms in mind, and with no thought to how they impact every other internet company, many of which operate on a very different basis.

Earlier this year, I wrote up my thoughts on what sort of regulatory approach would really "break up" big tech while preserving an open internet, but it's an approach that would require a very big shift in mindsets (one I'm still hoping will occur). However, Ben Thompson has taken a much more practical approach to thinking through regulating the internet. He, like me, is skeptical of most of these attempts to regulate the internet, but recognizing that it's absolutely going to happen no matter how skeptical we are, he is proposing a framework for thinking about regulating the internet, in a way that would (hopefully) minimize the worst outcomes from the approaches being used today. You should read the whole thing to understand the thinking, the background, and the approach, but the key aspect of Thompson's framework is to recognize that there are different kinds of internet companies -- and that's true not just up and down the stack, but across the different kinds of services. So his hope is that if the regulatory approaches were more narrowly targeted, in a manner in which they fit better, we'd have a lot less collateral damage from trying to shove a square regulatory approach through a round internet service.
Another key to his approach is a more modern update to the common "free as in speech v. free as in beer" concept that everyone in the open source world is familiar with. Ben talks about a third option that has been discussed for decades, which is "free as in puppy" -- meaning something that you get for free, but which then has an ongoing cost in terms of maintaining the free thing you got.

Most in the West agree, at least in theory, with the idea that the Internet should preserve "free as in speech"; China in particular represents a cautionary tale as to how technology can be leveraged in the opposite direction. The question that should be asked, though, is if preserving "free as in speech" should also mean preserving "free as in beer." Specifically, Facebook and YouTube offer "free as in speech" in conjunction with "free as in beer": content can be created and proliferated without any responsibility, including cost. Might it be better if content that society deemed problematic were still "free as in speech", but also "free as in puppy" — that is, with costs to the supplier that aligned with the costs to society?

With that premise, he suggests a way to better target any potential platform regulation. In theory, this lets various countries who believe there are certain problems on the internet more narrowly target their regulations without harming other parts of the internet:

This distinct categorization is critical to developing regulation that actually addresses problems without adverse side effects. Australia, for example, has no need to be concerned about shared hosting sites, but rather Facebook and YouTube; similarly, Europe wants to rein in tech giants without — and I will give the E.U. the benefit of the doubt here — burdening small online businesses with massive amounts of red tape. And, from a theoretical perspective, the appropriate place for regulation is where there is market failure; constraining the application to that failure is what is so difficult.

Please don't comment on this without first reading Ben's entire piece, as it gets into a lot more detail. He very readily admits that this doesn't answer all the questions (and, indeed, likely creates a bunch of new ones). I will admit that I'm not convinced by this model, but I do appreciate that it's given me a lot to think about. At the very least, targeting just the ad-supported platforms for regulation solves two problems: (1) the misaligned incentives of ad-supported platforms to consider the wider societal impact of the platform, and (2) the sledgehammer approach of regulating all internet platforms, no matter what type and where in the internet stack they reside, by more narrowly focusing regulation just at the application level and just at a particular type of service. And, frankly, this kind of approach could potentially move us towards that world of "protocols, not platforms" that I envision (a more regulated ad-supported platform world might push companies to explore non-advertising based business models).

I still have lots of concerns, however. For all of the complaints about what Google and Facebook have done with an ad-supported model, we should be willing to admit that an ad-supported model has created some incredibly powerful services that have really done amazing things for many, many people. Everyone focuses on the negatives -- which exist -- but we shouldn't ignore how much of the good stuff we've gotten because of an internet built on the back of advertising. Can it be improved? Absolutely.
But targeting internet advertising as "the problem" still feels too broad to me (and, in fact, I think Ben would likely agree on that point). If there must be a regulatory approach, it should be targeted not just by the nature of the platform, but around the specific and articulated harm it is trying to solve. At least that way, we can weigh the harms such a law might mitigate against the good aspects it might hinder, and then be better able to judge whether or not the regulatory approach makes sense. I'm still skeptical that most plans to regulate the internet will do a very good job of narrowly targeting actual harms (and of doing so without throwing away lots of good stuff), but since we're going to be having lots of discussions around these regulations in the coming weeks, months, and years, we might as well start having the discussion of how we should view and analyze these proposed laws. And, on that front, Ben's contribution is a useful way of thinking about these things.

posted 1 day ago on techdirt
In 2017, FCC head Ajit Pai came under fire for filling a new "Broadband Deployment Advisory Council" (BDAC) task force with oodles of industry representatives, but few if any consumer representatives or local town or city officials. Not too surprisingly, the panel saw a significant amount of controversy, several protest resignations, and the arrest of a one-time panel chair for fraud, but the panel itself never actually accomplished much of anything to address the problem it was created for. Fast forward to last week, and the FCC has once again found itself under fire for appointing a member of the American Legislative Exchange Council (ALEC) to the agency's "consumer advisory" panel:

"A committee that advises the Federal Communications Commission on consumer-related matters now includes a representative of the American Legislative Exchange Council (ALEC), which lobbies against municipal broadband, net neutrality, and other consumer protection measures. FCC Chairman Ajit Pai announced his Consumer Advisory Committee's new makeup on Wednesday. One new member is Jonathon Hauenschild, director of ALEC's Task Force on Communications and Technology. He and other Consumer Advisory Committee members will serve two-year terms."

The most obvious problem is that ALEC is directly employed by the telecom sector to undermine and eliminate consumer protections. ALEC played a starring role in helping the broadband industry pass blatantly protectionist bills in more than 21 states that hamstrung or simply banned towns or cities from building their own networks, even in instances when private industry refuses to. It has also bandied about cease-and-desist warnings against critics who've pointed this out. Both ALEC and Hauenschild have lobbied against net neutrality protections that continue to have the overwhelming bipartisan support of the public. You'd be hard pressed to find a single actual consumer advocate who'd agree with ALEC's positions on these issues.

While Hauenschild likely holds some opinions that diverge from his employer's, there's very little in his background or time at ALEC that would qualify him as an expert on consumer telecom issues. Certainly nothing that would somehow position him above a universe of objective experts or academics who've actually worked to protect consumer welfare. And while Pai appointing a like-minded ally to an FCC panel isn't surprising, involving ALEC also raised a few eyebrows, given that even AT&T and Verizon have recently backed away from the organization due to its recent hosting of a bigoted, far-right extremist:

"ALEC has long received financial support from the telecom industry. But Verizon left ALEC in September 2018 after it hosted a speech by right-wing activist David Horowitz, in which Horowitz argued against the legalization of abortion and gay marriage, compared the left wing's support of "redistribution of income" to slavery, and said that "at the K-12 level, school curricula have been turned over to racist organizations like Black Lives Matter, and terrorist organizations like the Muslim Brotherhood." Verizon explained to The Intercept that it "has no tolerance for racist, white supremacist, or sexist comment[s] or ideals." AT&T subsequently ended its membership in ALEC, also citing the Horowitz speech."

While ALEC certainly has expertise in consumer protection, it comes in the form of trying to prevent it from happening. Again, Pai surrounding himself with like-minded allies isn't surprising.
But appointing an ALEC rep to a consumer issue advisory panel is kind of like inviting a hungry shark to your swimming safety seminar: there's certainly experience there, just not of a variety you're going to find useful. And certainly not helpful when it comes to fixing the universe of problems consumers face in a telecom sector dominated by wealthy and well-connected natural monopolies.

posted 1 day ago on techdirt
Another one of 1-800-LAW-FIRM's lawsuits has been tossed for a second time. After being shut down at the district level for attempting to hold social media companies responsible for the Pulse nightclub shooting in Orlando, Florida, the law firm asked the Sixth Circuit Court of Appeals to take another look at its dubious legal theories. The Appeals Court has taken another look, and it doesn't like what it sees any more than the district court did.

The violent act committed inside the nightclub was horrible, but the court cannot provide a remedy for every wrong -- especially not in a case where the plaintiffs are trying to hold a third party responsible for violent acts it neither encouraged nor committed. Social media platforms may make it easier for terrorists to spread their message, but that does not add up to material support for terrorism. That's the legal theory 1-800-LAW-FIRM and Excolo Law have been using to push these lawsuits in order to dodge the obvious Section 230 implications. It has yet to find support in any court. It doesn't find any here either. From the decision [PDF]:

We sympathize with Plaintiffs—they suffered through one of the worst terrorist attacks in American history. "But not everything is redressable in a court." Kemper v. Deutsche Bank AG, 911 F.3d 383, 386 (7th Cir. 2018). And terrorist attacks present unique difficulties for those injured because the terrorists "directly responsible may be beyond the reach of the court." Id. This is one such case. But the absence of Mateen and the inability to hold ISIS responsible cannot create liability elsewhere. Plaintiffs' complaint includes no allegations that Twitter, Facebook, or Google had any direct connection to Mateen or his heinous act. And Plaintiffs do not suggest that Defendants provided "material support" to Mateen. Without these connections, Plaintiffs cannot state a viable claim under the ATA. As a result, we affirm the district court's dismissal of Plaintiffs' claims.

The Appeals Court also agrees with the lower court's finding that the nightclub shooting had almost nothing to do with the international terrorism the plaintiffs claim Twitter and others are helping support. The shooter was "self-radicalized," and nothing in the plaintiffs' 51-page complaint is able to conclusively tie a domestic shooting by a US citizen to ISIS or its online recruitment efforts. The plaintiffs want the court to apply a completely ridiculous "proximate cause" standard that has never been applied before and will never be applied in the future. There's no legal basis for it, and it would pretty much allow almost anyone to sue almost anyone else for almost anything.

With the "highly interconnected" nature of social media, the internet, and "modern economic and social life"—we expect Defendants' websites to cause some "ripples of harm" that would "flow far beyond the defendant's misconduct." Fields, 881 F.3d at 749. But without more, Defendants do not proximately cause all these potential ripples. The content did not compel Mateen's actions. Indeed, if we accepted Plaintiffs' argument, Defendants would become liable for seemingly endless acts of modern violence simply because the individual viewed relevant social media content before deciding to commit the violence.

With nothing to hang on Twitter, there's nothing left of this lawsuit. The state law claims follow the federal claims into a dismissal with prejudice.
1-800-LAW-FIRM wants another chance to amend its lawsuit, but the Appeals Court says it should have tried that earlier, at the district level. This lawsuit is dead, just like so many others filed by this law firm.

posted 1 day ago on techdirt
A quick followup to yesterday's post about officials in Peachtree City, Georgia looking to pass a resolution that would allow city officials to spend taxpayer money to sue their own critics for defamation. There were all sorts of problems with this... and it appears the taxpayers weren't happy. At the city council meeting last night, lots of those taxpayers made it clear this was a bad idea:

People lined up to push back against the resolution....

"You get to decide whether you've been defamed or not, and you want to use our money, taxpayer money, to sue us, which might impoverish us," said another Peachtree City resident.

It sounds like nearly everyone who spoke out was against the proposal, leading it to be voted down unanimously, though the mayor, Vanessa Fleisch, had an odd bit of commentary on the whole thing:

"I think it's the right outcome. I work for the citizens. The intent was very pure, but it wasn't written correctly, I've been told, and so the citizens have spoken and we move on," said Mayor Fleisch.

The intent is never pure when the goal is for public officials to sue critics. And the problem was not that it wasn't written correctly. The problem was with the whole idea. Hopefully, this doesn't mean there's a plan to "rewrite" this proposal. Just leave it be, and maybe get a somewhat thicker skin if you're going to work for the government.

posted 1 day ago on techdirt
I know that some will argue that "every week" is a bad week for Facebook with regards to privacy, but this week in particular is looking especially awful, with (last I checked!) three "big" stories regarding the company's bad decisions and handling of data. Of course, because this is Facebook, I still think the reporting is getting the story a bit wrong. The story that has gotten the most attention is the least concerning, while the ones getting less attention are the real problems.

First up is the NBC News story going through a big pile of leaked internal documents from Facebook's ongoing lawsuit with app developer Six4Three. If you don't recall, that company, which made a skeezy app to let you find pictures of other people on Facebook wearing bikinis, got mad and sued Facebook when Facebook (finally) realized that maybe it shouldn't give app developers access to so much data, and cut them all off (effectively killing Six4Three's entire ability to operate). Many people reacted to this week's story as if it were some big reveal that Facebook cut favorable data deals with some partners, and that it toyed around with business models selling access to data, but frankly, I don't see all that much that's different from the cache of documents that was released back in December. As I said then, most of the stuff that people are freaking out about appears to be taken out of context. Facebook investigating different business models isn't inherently bad. And many people are framing those discussions completely outside of the context of what Facebook was actually doing at the time or how people viewed the data it had access to. A lot of focus is on the fact that Facebook put a dollar value on the data -- but that doesn't actually mean (as many are suggesting) that it ever planned to "sell the data." It did look at charging app developers to access the data, but that's not a particularly crazy idea -- one that lots of people discussed at the time, and one that plenty of companies with lots of data use. There are, certainly, reasonable concerns to be raised about Facebook looking to deliberately undermine competitive services via its platform -- and that was the part that most concerned me back in December as possible antitrust violations. But there doesn't really appear to be that much new on that front. Facebook looks sketchy, but when hasn't it looked sketchy?

And, because some will erroneously call me a Facebook shill, let's look at the other two privacy blunders this week, because there's nothing redeeming about either of them. Both are straight up awful. They're the kinds of security mistakes that tiny startups with no real understanding of security make -- not something that a company like Facebook should ever make. If you want to be concerned about Facebook and privacy, focus on these two stories, which suggest not so much a cavalier attitude towards privacy as an incompetent implementation of basic security practices.

First up, Business Insider revealed that Facebook was asking users for their email passwords and then sucking up all of their contacts without asking for permission. While you might wonder what idiot would hand Facebook his or her email password for no obvious reason (a valid question), that doesn't absolve Facebook from even asking. After pressing Facebook on this, the company admitted that it sucked up the email contacts of 1.5 million users this way, and that it's now deleting them.
Since May 2016, the social-networking company has collected the contact lists of 1.5 million users new to the social network, Business Insider can reveal. The Silicon Valley company said the contact data was "unintentionally uploaded to Facebook," and it is now deleting them. The revelation comes after pseudonymous security researcher e-sushi noticed that Facebook was asking some users to enter their email passwords when they signed up for new accounts to verify their identities, a move widely condemned by security experts. Business Insider then discovered that if you entered your email password, a message popped up saying it was "importing" your contacts without asking for permission first.

This is a very bad security practice, and it certainly could lead to legal issues for Facebook. Sucking up that kind of data without permission is super bad. Facebook's excuse here is not good either:

A Facebook spokesperson said before May 2016, it offered an option to verify a user's account using their email password and voluntarily upload their contacts at the same time. However, they said, the company changed the feature, and the text informing users that their contacts would be uploaded was deleted — but the underlying functionality was not.

How does someone not catch that? How does someone not catch that asking for your (non-Facebook!) email password is just a bad idea in general? This reflects extremely poorly on Facebook's security review process.

The second story may be even worse. TechCrunch has the story that Facebook is now admitting that the really bad screwup first reported last month, concerning the company "accidentally" storing plaintext passwords of some Instagram users, actually impacted millions of users, rather than just a few thousand as originally reported. This, of course, goes back to the general law of security breaches that we've discussed for over a decade: it's always worse than originally reported. It's difficult to think of a big security breach where the number of impacted people wasn't updated upwards at a later date. As we noted last month, what caused this was legitimately a bug, rather than nefarious intent, but for a company of Facebook's size, and with the security talent it has on staff, this is the kind of bug that is unacceptable -- especially with something such as protecting passwords (an area of security that is very well developed). I guess these are just more things to add to the neverending Facebook apology tour.
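To underline just how well developed that area is: the standard practice is to store only a salted, slow hash of each password, so there is never a plaintext copy sitting around to accidentally log or store. Here's a minimal sketch using the bcrypt library in Python -- purely illustrative, and obviously not Facebook's or Instagram's actual code:

```python
# Minimal sketch of standard password handling with the bcrypt library.
# Illustrative only -- not Facebook's or Instagram's actual implementation.
import bcrypt

def store_password(plaintext: str) -> bytes:
    # bcrypt generates a random per-user salt and bakes it into the hash,
    # so two users with the same password get different stored values.
    return bcrypt.hashpw(plaintext.encode("utf-8"), bcrypt.gensalt())

def check_password(plaintext: str, stored: bytes) -> bool:
    # Verification re-hashes the candidate using the salt embedded in the
    # stored hash; the plaintext never needs to be written anywhere.
    return bcrypt.checkpw(plaintext.encode("utf-8"), stored)

# The Instagram bug, as reported, amounted to plaintext passwords ending
# up in internal logs -- i.e., something like this anti-pattern:
#   logger.info("login: user=%s password=%s", user, plaintext)  # never do this
```

Keeping the plaintext out of logs and databases is table stakes, which is why a bug like this is so startling at a company with Facebook's security staff.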

posted 1 day ago on techdirt
A leading book summary service for entrepreneurs, executives, and business coaches, Readitfor.me condenses the most important books into twelve-minute summaries that will keep you up to date on the most important trends in the business world. You'll get summaries of best sellers, classic reads, and books that will help you solve specific problems like productivity, tough conversations, management, and more. The one-year subscription is on sale for $29.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

posted 1 day ago on techdirt
Back in 2011, DOJ regulators blocked AT&T from acquiring T-Mobile, arguing that the deal would have harmed consumers and resulted in higher rates by eliminating one of just four major wireless players. That's a pretty easy argument to make, given that every time a country allows four wireless carriers to morph into three, the sector gets less competitive and prices go up (see: Ireland or Canada). Blocking the deal wound up being a good thing: T-Mobile went on to be even more disruptive, and has helped introduce a number of consumer-friendly market shifts like cheaper international roaming and the death of long-term contracts.

In 2014, T-Mobile and Sprint tried to merge, and regulators (quite correctly) pointed out the deal wouldn't be good for consumers or the market, and blocked it from happening. Fast forward to 2019, and T-Mobile is once again trying to merge with Sprint, hoping to take advantage of the Trump era to finally overcome regulatory scrutiny. Both companies have been telling everyone who'll listen that reducing the total number of competitors will somehow boost competition in the sector. Not too surprisingly, even in this era of blind telecom sector fealty, regulators are having a hard time swallowing this particular pill. Both Reuters and the Wall Street Journal cite DOJ insiders who say the agency isn't likely to approve the deal in its current form:

"The U.S. Justice Department has told T-Mobile US Inc and Sprint Corp it has concerns about their proposed $26 billion merger in its current structure, sources familiar with the matter said on Tuesday, although no final decision has been made."

T-Mobile CEO John Legere was quick to insist the "premise" of the report was "simply untrue":

The premise of this story, as summarized in the first paragraph, is simply untrue. Out of respect for the process, we have no further comment. This continues to be our policy since we announced our merger last year. https://t.co/3q9CVgkRfv key info: https://t.co/N5YvuuJtPZ

— John Legere (@JohnLegere) April 16, 2019

The CEO didn't get into which aspect of the story he took issue with, but it's not just the DOJ that has expressed skepticism of the deal's benefits. A growing array of state regulators and attorneys general have also started kicking back against the deal, expressing concern that more consolidation is probably the very last thing the already arguably broken US telecom sector needs right now. While T-Mobile claims merging with Sprint will create a bigger, better third competitor, that's simply not how telecom works historically. Fewer competitors equals less competition and higher prices. History suggests that's not really debatable.

Wall Street analyst Craig Moffett tells me his research firm dropped the chance of merger approval from around 50% to 33%, given the growing state-level opposition, public disdain, and the growing call for more meaningful US antitrust enforcement in the wake of the DOJ's bungled AT&T Time Warner merger kerfuffle.

Granted, the deal could still be approved. Pai's FCC is likely to rubber stamp the deal without much thought. And the DOJ could approve the deal given the right combination of conditions. Given the vast army of revolving door lobbyists T-Mobile has under its belt (ranging from Trump ally Corey Lewandowski to former FCC Commissioners like Mignon Clyburn and Robert McDowell), there's a lot of firepower aimed at getting this deal done, regardless of how terrible history says it's going to be for consumers, competition, and a healthy wireless market.

posted 1 day ago on techdirt
The First Amendment is getting no help from the nation's highest court. Yet again, the Supreme Court is declining an opportunity to answer a very important question about free speech: where is the dividing line between threats and violent -- but protected -- speech?

The Supreme Court already punted on this issue in 2015 with the Elonis v. United States case. In that case, Anthony Elonis posted a bunch of nasty stuff online about his ex-wife. He ended up being jailed for these posts -- which he claimed were merely him venting in the form of ultraviolent rap lyrics -- after the court found they constituted threats. His appeal went all the way to the top, but the Supreme Court had nothing for him. It did overturn his conviction, but it left the First Amendment question unanswered.

The Supreme Court said the trial court adhered to the wrong negligence standard -- one that said Elonis should have known his posts were threatening if any "reasonable person" would find them threatening. The correct standard to use was mens rea, meaning the government needed to prove Elonis knew his posts were illegal (i.e., that they were "genuine threats") when he posted them. As for the First Amendment, the Supreme Court seemed happy to avoid the issue completely. Having decided the wrong standard was used by the trial court, the Supreme Court declared it did not need to hand down an opinion on the First Amendment implications, leading to the mess we're in now, with lower courts drawing disparate conclusions about the line between threats and protected speech.

The mess will continue. Pittsburgh rap artist Jamal Knox was jailed for the lyrics of his song "Fuck the Police." An obvious tribute to the 1988 N.W.A. track, Knox's song included the names of two officers who had previously arrested him, along with some very descriptive violent acts involving them.

Knox and co-defendant Beasley's song, posted on Facebook and YouTube, included the names of the two Pittsburgh officers who arrested them, with lyrics like, "I'ma jam this rusty knife all in his guts and chop his feet" and "Well your shift over at three and I'm gonna f*** up where you sleep." The song ended, "Let's kill these cops cuz they don't do us no good."

The officers testified that the lyrics made them "nervous" and concerned for their safety, with one saying it led him to leave the police force. On the basis of the cops' subjective response to the song's lyrics, Knox was sent to prison for two years. (His sentence also covered drug and gun charges.) Knox argued his lyrics were part of his rap persona and that he was not trying to threaten the officers, much less trying to bring his violent lyrics to life. The state supreme court upheld the conviction, apparently because the justices had never heard a rap song in their lives.

"...The rap song here is of a different nature and quality," the court's chief justice wrote in the majority opinion. "They do not include political, social, or academic commentary, nor are they facially satirical or ironic. Rather, they primarily portray violence toward the police," the opinion read.

This rationale was rebutted in a masterful understatement in the rappers' brief to the Supreme Court:

The rappers, in their brief filed Wednesday, said that the opinion "reveals a court deeply unaware of popular music generally and rap music specifically."

We'll never find out whether the SCOTUS justices are a bit more up on today's urban music, unfortunately.
The Supreme Court declined Monday to take up the case of rapper Jamal Knox, who argued he was sent to prison for a song that was protected by the First Amendment. By avoiding the issue for now, the justices left for another day a look at the contours of so-called "true threats" -- speech that falls outside the protections of the First Amendment.

That's a shame. Thanks to its lack of interest, we're just going to have to throw the greatest rap collaboration track ever released in the trash. The amicus brief sent to the unreceptive court was penned by some lawyers and legal scholars. Oh, and these guys:

Additional amici include musical artists Chancelor Bennett ("Chance the Rapper"), Robert Rihmeek Williams ("Meek Mill"), Mario Mims ("Yo Gotti"), Joseph Antonio Cartagena ("Fat Joe"), Donnie Lewis ("Mad Skillz"), Shéyaa Bin Abraham-Joseph ("21 Savage"), Jasiri Oronde Smith ("Jasiri X"), David Styles ("Styles P"), Simon Tam (member of The Slants and petitioner in Matal v. Tam, 137 S. Ct. 1744 (2017)), and Luther R. Campbell (member of 2 Live Crew and petitioner in Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569 (1994)), as well as music industry representatives Alan Light (former Editor-in-Chief, Vibe and Spin magazines), Dina LaPolt, Patrick Corcoran, Peter Lewit, and the entertainment company Roc Nation, LLC.

I guess the First Amendment will have to wait for another test case the Supreme Court can't wait to bypass. We need to have this question answered. Rap music -- and those inspired by it -- is something that just isn't going to go away. Until SCOTUS finally decides it's going to answer some difficult questions, all we really have left is this GIF:

posted 1 day ago on techdirt
Undaunted by the fact that internet filters never actually seem to work, the UK continues its quest to censor the internet of all of its naughty bits. The UK has long implemented porn filters in a bid to restrict anybody under the age of 18 from accessing such content. New age verification controls were also mandated as part of the Digital Economy Act of 2017. But as we've previously noted, the UK government has seen several fits and starts with its proposal as it desperately tries to convince the public and business sectors that the ham-fisted effort is actually going to work. This week the country formally announced that its filter proposal now has an official start date: July 15. According to the UK government, websites that fail to comply with the country's age verification program face fines of up to £250,000, risk being taken offline, or may lose access to payment services:

"...commercial providers of online pornography will be required by law to carry out robust age-verification checks on users, to ensure that they are 18 or over. The move is backed by 88% of UK parents with children aged 7-17, who agree there should be robust age-verification controls in place to stop children seeing pornography online. Websites that fail to implement age-verification technology face having payment services withdrawn or being blocked for UK users."

In short, starting in July, should you want to view some porn, you'll be redirected to a special subsite where you'll be prompted for an email address and a password, before verifying your age using a driving license or a passport. There are a few exceptions, including websites that aren't selling access to porn and those that are simply engaging in "artistic" pursuits. Expecting the UK government to figure all of this out on the run should, at the very least, provide some entertainment value.

While this might make some people feel good, there's still little hard data to suggest any of this is going to work, and more than a few hints that it's actually going to cause problems. The obvious risk of this data leaking out and being used nefariously is one concern. The other major problem is that there are simply too many porn websites to effectively police, and the belief that the UK government can police them all is arguably laughable. Meanwhile, all it takes to avoid the restrictions is the use of a VPN or proxy to trick the website in question into thinking that you're coming from another country (a sketch of why that works so reliably follows below). Others note that the ban is likely to just drive many users looking for porn toward notably more seedy venues and workarounds:

"When you hire a bouncer to crack down on kids drinking in the local pub, you don't get a sudden rise in homework. You get a surge in fake IDs and drinking in the park. The porn block will do the same thing online, pushing kids towards streaming sites stuffed with malware, creepy subreddits, and places on the dark web that sell credit card details – because it seems as if this age verification system is going to use credit cards as its basis. It's a classic case of driving legal behaviour underground, making it a whole lot dodgier than it was in the first place."

Meanwhile, there's little data supporting the idea that porn filters in general even work, and plenty of data suggesting such filters routinely cause collateral damage.
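On that VPN point: here's a rough sketch of the kind of IP-based country check any geographic enforcement of the law would have to rely on. The database file and function names are assumptions for illustration, not anything from the UK's actual specification; the geoip2 reader shown is MaxMind's real Python library.

```python
# Hedged sketch of a naive IP-geolocation gate, using MaxMind's geoip2 reader.
# Illustrative only -- the UK scheme's real mechanics aren't specified anywhere.
import geoip2.database
import geoip2.errors

reader = geoip2.database.Reader("GeoLite2-Country.mmdb")  # hypothetical DB path

def needs_uk_age_check(client_ip: str) -> bool:
    # The server only ever sees the connecting IP address. A user behind a
    # VPN or proxy shows up with the exit node's IP, so if that node sits
    # outside the UK, this check waves them straight through.
    try:
        record = reader.country(client_ip)
    except geoip2.errors.AddressNotFoundError:
        return False  # unattributable IPs can't be pinned to the UK at all
    return record.country.iso_code == "GB"
```

However the checks are implemented, the site can only act on the IP it sees, which is exactly why the workaround is so trivial.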
A joint report published this week by digital rights advocates the Open Rights Group (ORG) and the VPN review portal Top10VPN noted the UK government already filters 760,000 websites with notable inaccuracy, leading to the routine inadvertent censorship of legitimate websites. All in all, the UK's war on porn is a puritanical feel-good measure that's going to cause far more problems than it actually fixes. And in a few years, it's likely the UK will either retreat from the measure after it gets tired of playing a futile game of naughty-bit Whac-a-Mole, or will double down on the efforts while pretending the entire affair actually worked.

posted 2 days ago on techdirt
We've not been shy about pointing out that the recent practice by famous athletes of trademarking their nicknames all seems somewhat silly. The whole thing smacks of some combination of a money-grab over terms often not coined by the athletes themselves, and the kind of protectionism by the famous that is just all the rage these days. A recent instance of this, concerning the trademark application for Luka Doncic's nickname, carried with it a twist, however: the application was filed not by Doncic himself, but by the Dallas Mavericks, the team for which he plays. The thrust of our post on the matter was roughly: well, that seems kind of shitty. After all, NBA players tend not to play for the same teams forever, though it's worth pointing out that the Mavericks pulled this off with Dirk Nowitzki, so there's that. Still, should Doncic move to another team, what happens to that trademark on his nickname?

Mark Cuban appeared to show up in the comments:

Typical techdirt. Don't ask why. Dont do any research. Just pretend they know something

We have been grabbing player urls and trademarking and copyrighting player related terms for years. It's to protect players. You know what's worst than your article ? Some scammer trademarking or copyrighting a nickname or slogan they read about online

All of our players have the right to use them, but never do. They just appreciate that we are looking out for them.

After I was done yawning at the "Typical techdirt" part of the comment, it took me roughly thirty seconds to think up a far more player-friendly option: the team could simply educate its players on how to trademark their own nicknames if they so choose, rather than attempting to trademark them itself. After all, the team is just looking out for the players, right? My research into our own comments tells me that the Mavs do, at least. It would probably also help protect the players if the team were successful at the trademark process, something that was most certainly not the case this go around.

The United States Patent and Trademark Office denied the Dallas Mavericks' trademark applications for two of Luka Doncic's nicknames. The Mavericks sought to acquire the rights to THE MATADOR and EL MATADOR last December. Doncic picked up the nickname of Matador while playing for Real Madrid in Spain before joining the NBA.

According to trademark lawyer Josh Gerben, who has been following the application process, the refusal isn't a surprise. In a video posted to Twitter, he states that the reason the applications were rejected was that the USPTO found 20 other preexisting Matador trademarks that it views as too similar to Doncic's marks.

Gerben went on to note that the team may have made its application harder to approve by packing as many market designations into it as it possibly could. This is somewhat common, but for a mark that is already approved for other markets, this broad shotgun-based approach doesn't win you any points with the USPTO. It's also the case that protecting players is not a market unto itself, meaning that the Mavericks would have had to show a real intent to use the marks in the markets requested. Perhaps a failure to do so also counted against the application. I'll give Cuban credit where it's due, however, because oftentimes when he's quoted I find myself falling just a little bit more in love with him. For now at least, it doesn't look like Doncic will have to worry about the Mavericks acquiring the rights to his Matador nickname.
The team won't be challenging the USPTO's ruling. When contacted, Cuban was blunt about it. "Shit happens," Cuban said via email. "Moving on." Never change, Mark. Well, maybe just a little...

posted 2 days ago on techdirt
Recently, the Greene County (TN) Sheriff's Department spent the day being owned on Twitter. It wasn't necessarily the sheriff's fault. The Tennessee Dept. of General Services decided to show off the Sheriff's armored vehicle, obtained via the Defense Department's 1033 program. This program allows agencies like the GCSD to obtain military equipment so they can ensure the safety of [checks census figures] the 68,000 residents of Greene County. This is the tweet from the Department of General Services that became the landmine under the Sheriff's Department's MRAP's wheels:

We're thrilled that our LESO program, in our Vehicle and Asset Management division, was able to supply @GreeneSheriff with this mine-resistant ambush protected vehicle (MRAP) for the agency's use. 📷: @GreeneSun pic.twitter.com/NHKOTpcZIn

— TN Department of General Services (@TennDGS) April 11, 2019

This gaudy ratio-ing of the General Services tweet -- filled with a long list of responses ridiculing the Sheriff's Department for its war machine -- led to the Sheriff himself defending the acquisition to local journalists. This went far worse than anyone probably expected. I don't know what I was expecting, but it certainly wasn't the cognitive dissonance on display here. Sheriff Wesley Holt first says the MRAP is for the children.

Greene County Sheriff Wesley Holt said the MRAP has so far been used "primarily to show the kids" and not for any other purpose.

This is an attempt to get residents to view it as the equivalent of a monster truck: big, impressive, but mainly just an oversized toy with zero war machine implications. Then Holt says, actually, it's kind of a war machine, but mainly something that protects officers, rather than assaults citizens.

According to its application submitted to General Services, the sheriff's department intended to use the MRAP for SWAT response, including for barricaded suspects, during active shootings and for natural disasters. Holt pointed to a police shooting Sunday that left two Greeneville Police Department officers injured after exchanging fire with a suspect inside of an apartment.

"We could've took this armored vehicle over there and pulled right up to the front door and kept our officers safe inside that armored vehicle," Holt said.

This makes more sense. An MRAP definitely provides defensive cover for officers responding to dangerous situations, but it's still probably overkill in a county like Greene's. This is a little better than the "toy to show kids" argument. It's too bad the Sheriff's Department didn't have the MRAP before the recent shooting—

[D]espite the state agency showing off the MRAP this week, Holt said the department received it a couple years ago. "We've had that thing for a while," Holt said. "What we finally did was had it striped."

W. T. F. Sheriff: "We can use this vehicle in dangerous situations just like the dangerous situation we didn't use the vehicle in." A police department from the county seat of the county the sheriff oversees ended up with officers wounded while the sheriff's MRAP stayed in its garage. So much for interdepartmental cooperation. It's confirmed: it's a shiny toy meant to entertain the smallest minds. Also children. It will only be used defensively in dangerous situations, but probably not even then. The Sheriff's Department didn't need this vehicle. It wanted it, and there was nothing standing in the way of obtaining it. Now it has it, and it's not even using it for the things it should be using it for.
Chances are, residents are going to have to protest something to see this MRAP loaded full of cops.

posted 2 days ago on techdirt
What is it with the state of Georgia and its attempts to stifle free speech and a free press? It's the state that argues its official copy of the law is covered by copyright and cannot be posted online. The same state that is currently trying to regulate journalism by creating "ethical standards" journalists have to follow. The same state that is so bad at responding to public records law that an official was actually criminally charged for it.

The latest, as sent in by a few people, is that tonight Peachtree City, a suburb of Atlanta, is voting on a laughably, obviously unconstitutional provision that would allow city officials to file bogus SLAPP suits, using taxpayer funds, against critics. Really.

Specifically, the proposal says that the city will provide "coverage for legal expenses when a City official has been defamed in a public media outlet or otherwise slandered or libeled to the public..." It does note that the defamation must be a "valid claim for defamation... under Georgia law." So, one might argue that filing a bogus SLAPP suit wouldn't be covered by this policy -- but it's unclear how that will work. We see bogus defamation lawsuits filed all the time to censor critics, and for public officials, the bar to a successful defamation lawsuit is (for very good reasons) quite high. So, under this proposal, will the city officials have to pay back the city treasury if such a case is tossed out? One would hope that's the case, but the text of the proposal has no language to that effect. The only language it has regarding reimbursement is that if the lawsuit is "settled in the City's favor, the City shall seek reimbursement for the actual legal costs incurred in successful pursuit of the defamation ruling by the person or persons committing the defamation." It has no provision for what happens when it turns out there wasn't defamation and the city just wasted taxpayer funds suing critics who didn't actually defame anyone.

It is already dubious that any public official should ever be suing critics -- but having taxpayers foot the bill for SLAPP suits is so deeply obnoxious and unconstitutional that it seems perfect for Georgia. The city manager, Jon Rorie, is quoted in a few different articles about this, basically making the same extraordinarily bad point:

"I don't think that someone should have the ability to come in and just say something, that I committed a crime, I don't think it's fair," said Jon Rorie, the city manager of Peachtree City.

Right. And if it's actually defamatory, then you can sue them yourself. You don't need taxpayer funds to go after someone. In the other link (up above, towards the beginning of the article), Rorie gets even more ridiculous:

"It's a brave, new world. It's not about people criticizing. It's about being defamed," Rorie said, noting that such defamation could come from a newspaper or any media, including social media. "People think they have the luxury of saying false things about people. No one has the right to say I (or anyone working or volunteering for the city) am corrupt and attack me publicly."

Actually, Jon, you're wrong. People absolutely do have the right to attack you (verbally) in public. And they can certainly make opinion-based statements, including arguing that actions are corrupt. To actually be defamation, they would need to be making false statements of fact, where they knew the statement was false (or recklessly disregarded the truth), and those statements had to actually harm your reputation.
That doesn't seem to be the standard Rorie is laying out here. The journalist for the Citizen properly pointed out to Rorie that, even if this is all true, the bigger question is why taxpayer funds should go towards such lawsuits, and Rorie's answer is telling:

Rorie was asked why use taxpayer dollars to sue an individual. Rorie responded, saying he did not know the answer to that question, adding that the topic is worth discussing in a public meeting and, hence, was put on the agenda.

Seems like the kind of thing you should think about before pushing such a resolution, no?

posted 2 days ago on techdirt
The U.S. House Judiciary Committee held a hearing this week to discuss the spread of white nationalism, online and offline. The hearing tackled hard questions about how online platforms respond to extremism online and what role, if any, lawmakers should play. The desire for more aggressive moderation policies in the face of horrifying crimes is understandable, particularly in the wake of the recent massacre in New Zealand. But unfortunately, looking to Silicon Valley to be the speech police may do more harm than good.

When considering measures to discourage or filter out unwanted activity, platforms must consider how those mechanisms might be abused by bad actors. Similarly, when Congress considers regulating speech on online platforms, it must consider both the First Amendment implications and how its regulations might unintentionally encourage platforms to silence innocent people. Again and again, we've seen attempts to more aggressively stamp out hate and extremism online backfire in colossal ways. We've seen state actors abuse flagging systems in order to silence their political enemies. We've seen platforms inadvertently censor the work of journalists and activists attempting to document human rights atrocities.

But there's a lot platforms can do right now, starting with more transparency and visibility into platforms' moderation policies. Platforms ought to tell the public what types of unwanted content they are attempting to screen, how they do that screening, and what safeguards are in place to make sure that innocent people—especially those trying to document or respond to violence—aren't also censored. Rep. Pramila Jayapal urged the witnesses from Google and Facebook to share not just better reports of content removals, but also internal policies and training materials for moderators. Better transparency is not only crucial for helping to minimize the number of people silenced unintentionally; it's also essential for those working to study and fight hate groups. As the Anti-Defamation League's Eileen Hershenov noted:

To the tech companies, I would say that there is no definition of methodologies and measures and the impact. […] We don't have enough information and they don't share the data [we need] to go against this radicalization and to counter it.

Along with the American Civil Liberties Union, the Center for Democracy and Technology, and several other organizations and experts, EFF endorses the Santa Clara Principles, a simple set of guidelines to help align platform moderation practices with human rights and civil liberties principles. The Principles ask platforms to be honest with the public about how many posts and accounts they remove, to give notice to users who've had something removed about what was removed and under what rule, and to give those users a meaningful opportunity to appeal the decision.

Hershenov also cautioned lawmakers about the dangers of heavy-handed platform moderation, pointing out that social media offers a useful view for civil society and the public into how and where hate groups organize: "We do have to be careful about whether in taking stuff off of the web where we can find it, we push things underground where neither law enforcement nor civil society can prevent and deradicalize."

Before they try to pass laws to remove hate speech from the Internet, members of Congress should tread carefully. Such laws risk pushing platforms toward a more highly filtered Internet, silencing far more people than was intended.
As Supreme Court Justice Anthony Kennedy wrote in Matal v. Tam (PDF) in 2017, "A law that can be directed against speech found offensive to some portion of the public can be turned against minority and dissenting views to the detriment of all."

Republished from the EFF's Deeplinks blog.

posted 2 days ago on techdirt
We've talked a lot over the years about the importance of Section 230 of the Communications Decency Act (CDA) in helping to create and enable the internet and all of the free speech on the internet. Expect us to continue to talk about it, as it is increasingly under attack. Professor Eric Goldman has now released a short, and very worth reading, paper about Section 230, with the provocative title: Why Section 230 Is Better Than the First Amendment. The importance here is that many have argued that CDA 230 and the 1st Amendment go hand in hand. At times, in the past, I've argued that in a reasonable world we shouldn't even need a CDA 230, because the proper application of liability should obviously be with the person posting the law-breaking content, rather than the platform hosting it. But that was clearly talking about an idealistic world that does not exist. Given the frequency with which lots of people -- plaintiffs, journalists, politicians, and more -- go after platforms for the actions of their users, CDA 230's broad immunity is absolutely necessary if we're to have free speech online. Goldman's paper makes this clear:

If the First Amendment mirrors Section 230's speech protections, narrowing Section 230 would be inconsequential. This Essay explains why that's not the case. Section 230 provides defendants with more substantive and procedural benefits than the First Amendment does. Because the First Amendment does not backfill these benefits, reductions to Section 230's scope pose serious risks to Internet speech.

Goldman's paper lays out the argument very clearly (it's very readable for an academic paper). He notes that Congress has passed many laws that are "speech enhancing" beyond the 1st Amendment, which are often designed to make sure that the 1st Amendment is actually useful, rather than illusory. For example, he discusses things like shield laws for journalists, anti-SLAPP laws, and the recent Consumer Review Fairness Act, which bars companies from contractually banning consumer reviews. As Goldman notes, there are reasons to go beyond the 1st Amendment to better protect speech:

The justification for speech-enhancing statutes is clear when the laws extend the First Amendment. For example, anti-SLAPP laws and defamation retraction-demand statutes create procedural hurdles to speech-related lawsuits that the First Amendment does not require. The CRFA governs private vendor-customer contracts, which typically do not receive First Amendment scrutiny at all.

The key point that Goldman makes is that most of these laws provide procedural benefits. That is, what good are your free speech rights if someone can abuse legal processes to silence you? The important elements of things like anti-SLAPP laws and CDA 230 are in how they stop bogus lawsuits early, and at a lower cost, than the 1st Amendment alone.

Section 230(c)(1)'s early dismissals are valuable to defendants. They reduce the defendant's out-of-pocket costs to defeat an unmeritorious claim. For smaller Internet services, defending a single protracted lawsuit may be financially ruinous. Also, complex litigation can divert substantial managerial and organizational attention and mindshare from maintaining or enhancing the service. Thus, the ability of a defendant to resolve a case on a motion to dismiss (and avoiding expensive discovery) protects small and low-revenue Internet services; which in turn enhances the richness and diversity of the Internet ecosystem.

But, of course, it's not just about defeating bogus lawsuits.
There are wider benefits to this procedural expeditiousness, including much better protection of free speech online than would otherwise exist: Section 230(c)(1)’s early dismissals also benefit society in several ways. First, from a judicial economy standpoint, they save both parties from wasting valuable resources on doomed litigation. They also take meritless litigation off court dockets, freeing up the courts to handle other cases more carefully or quickly. Second, Internet services rarely make a lot of money from any single item of third-party content, so they lack financial incentives to stand behind individual items. Also, the services often lack the facts sufficient to properly defend third-party content in court. Accordingly, the most economically rational decision for most Internet services is to capitulate to any lawsuit over UGC—or avoid the lawsuit altogether by quickly removing third-party content in response to pre-litigation demands, without any investigation or pushback. This causes “collateral censorship,” i.e., the proactive removal of legitimate content as a prophylactic way of reducing potential legal risk and the associated potential defense costs. Unmeritorious quick removals are common in online copyright law, because the UGC copyright safe harbor is less defendant-favorable than Section 230. In contrast, Internet services routinely stand up to non-copyright legal threats, legal demands, and cease-and-desist letters targeting UGC—because Section 230 provides them legal certainty at a relatively low cost. And, of course, if everyone had to rely on the First Amendment to defend against such lawsuits, it would be a lot more time-consuming and a lot more expensive: Unlike Section 230, Constitutional litigation is rarely quick or cheap. In particular, courts are reluctant to resolve Constitutional arguments on motions to dismiss. Further, Constitutional doctrines often raise sufficient factual questions that courts wait until summary judgment (or later) before disposing of an unmeritorious case. Thus, Internet services will expect it to cost less to defend UGC via Section 230 than the First Amendment, which makes the services more willing to stand up for their users. And if Section 230 and the First Amendment both equally dictate defense wins, society as a whole benefits from reaching that result as quickly and cheaply as possible. Importantly, Goldman points out that this kind of protection probably helps marginalized communities protect their speech the most. This is partly why it's so annoying that so many of the people attacking CDA 230 lately have claimed to be doing so on behalf of marginalized communities. The services’ Section 230-aided commitment to their UGC especially benefits content from marginalized communities. Not only are marginalized voices more likely to be targeted by people in positions of power, but Internet services are less likely to worry about the consequences of removing content from marginalized communities. Compared to the First Amendment, Section 230 helps keep online the most “at risk” legitimate content. I've seen some critics of CDA 230 already criticizing Goldman's paper, not on substance (because, how could they?), but by misrepresenting it as suggesting that CDA 230 somehow supersedes the 1st Amendment. That's not what he's saying at all. 
What he argues -- clearly, carefully, and in great detail -- is that the procedural benefits of CDA 230 are vast, and would not simply be replicated by the 1st Amendment should Congress continue to chip away at the law. Permalink | Comments | Email This Story

Read More...
posted 2 days ago on techdirt
Unlike other eLearning courses that bog you down with dull voiceovers and boring videos, the Excel Data Analyst Certification School features real, hands-on projects to turn you into an Excel master, and you'll even have access to your own personal mentor to guide you along the way! You'll explore data manipulation, analytics, and problem-solving, produce data visualizations and business intel reports, and much more. Complete the bootcamp, and you'll emerge with an interview-ready portfolio and a CPD-accredited certification to back up your know-how. It's on sale for $49. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team. Permalink | Comments | Email This Story

Read More...
posted 2 days ago on techdirt
Former FBI director James Comey's move to the private sector has been… well… annoying, if we're honest. After being booted by President Trump for allegedly failing to pledge his fealty to the Oval Office throne, Comey has become a hero of the so-called Resistance. Those lionizing Comey as some sort of truth-to-power speaker seem to have forgotten he ignored every norm of pre-election propriety to announce his reopening of the investigation into Hillary Clinton's private email server, along with his years spent trying to undermine encryption. You can take a man out of the FBI, but you can't take the g-man out of the man. Comey may be as unimpressed as many of us are with the current White House leadership, but that only makes him somewhat relatable, not some hero molded from the fires of the long tradition of reshuffling agency leadership with every peaceful transfer of power. Comey will speak to whoever will listen and/or publish his thoughts. He recently spoke at a conference and offered up his limited apologies for the War on Encryption he waged following the San Bernardino shooting. As apologies go, it isn't one. Comey says the only error he made was being a bit too aggressive when seeking to undermine the security of millions of device users. (h/t Riana Pfefferkorn) Comey said it was "dumb" to launch the encryption debate by loudly criticizing companies for seeking encryption that would prevent law enforcement access even with a warrant. "I would do that differently if I had the chance," he said at a conference hosted by the Hewlett Foundation last week. Beyond that, Comey wouldn't have changed much. And his stance is still firmly anti-encryption. Sure, it sounds like he thinks encryption is important, but the only version he'd be willing to live with if he were still running the FBI would be a version no one would trust. Comey says, "you could build a key that sits with the U.S. government, a key that sits with the maker of that device and a key that sits with a non governmental agency" and a judge could order these keys to be combined to grant access to the data. He argued such a model could still be built despite widespread criticism from technologists who think such a solution would be impossible or insecure. 
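For those trying to picture what Comey's three-key model would actually involve, here's a minimal sketch of a split-key escrow scheme along the lines he describes -- purely an illustration of the concept, not any real proposal's design, and every name in it is hypothetical. It uses textbook XOR-based 3-of-3 secret sharing: no single share reveals anything about the key, but all three together reconstruct it.

```python
# Purely illustrative sketch (not any government's or vendor's actual
# design) of the "three key shares" escrow model Comey describes: a
# device key is split so that no single holder can decrypt anything,
# but all three shares combined reconstruct the key. This is textbook
# XOR-based 3-of-3 secret sharing.
import secrets

KEY_BYTES = 32  # e.g., a 256-bit device encryption key

def split_key(device_key: bytes):
    """Split a key into three XOR shares; all three are needed to rebuild it."""
    share_gov = secrets.token_bytes(KEY_BYTES)    # hypothetically held by the government
    share_maker = secrets.token_bytes(KEY_BYTES)  # hypothetically held by the device maker
    # The third share is chosen so the XOR of all three equals the key.
    share_ngo = bytes(k ^ a ^ b for k, a, b in
                      zip(device_key, share_gov, share_maker))
    return share_gov, share_maker, share_ngo

def recombine(*shares: bytes) -> bytes:
    """XOR the shares back together to recover the original key."""
    key = bytes(KEY_BYTES)  # start from all zeroes
    for share in shares:
        key = bytes(a ^ b for a, b in zip(key, share))
    return key

device_key = secrets.token_bytes(KEY_BYTES)
shares = split_key(device_key)
assert recombine(*shares) == device_key  # the math is the easy part
```

And that is precisely what technologists keep pointing out: the math is the trivial part. The hard, unsolved part is everything around it -- every share-holder and every recombination becomes a new point of attack, and once the full key is reassembled anywhere, it's a copyable secret that can never be un-disclosed.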
This proves Comey is still unwilling to be the adult in the room, even as he repeated his assertion that it's all he really wants: an "adult conversation." Plenty of adults have spoken, contradicting Comey's fervent, but unfounded, belief that compromised encryption is still secure encryption. Comey says it's time for the U.S. to have an "adult conversation" about what's at stake as more and more devices and services are encrypted. He warns that "broad swaths" of American life are now occurring out of the reach of law enforcement, and he's worried that the public isn't talking enough about the implications that could have for society. Whatever. As far as I can tell, law enforcement is doing just fine. The stuff that's encrypted doesn't appear to be much of a problem. The FBI is having no problem radicalizing troubled youths into DOJ prosecution fodder. The ATF is still running stash house stings, turning poor people into federal inmates for thinking about robbing a fake drug stash house of its nonexistent drugs. The DEA is still spending a great deal of time looking for cash, rather than drugs. And local law enforcement is doing the same thing, concentrating on asset forfeiture, SWAT team raids, and talking people into having sex with officers pretending to be 14-year-old girls. Not really seeing the problem encryption poses for law enforcement in any of these endeavors. Comey isn't here to speak truth to power or expound on the virtues of the rule of law. He isn't even truly apologetic for his heavy-handed anti-encryption rhetoric over the past few years. He wants people to believe he's a paragon of virtue, thanks to his unceremonious ouster. But he's still the same guy who used to run the FBI and he still has the same goals. Getting fired hasn't made him a better person and it sure as shit hasn't made him a hero. Permalink | Comments | Email This Story

Read More...
posted 3 days ago on techdirt
The late 2017 DOJ announcement that it would be suing to stop AT&T's $86 billion merger with Time Warner turned more than a few heads. While the DOJ insisted that the move was driven entirely by an interest in protecting consumers, the decision was utterly discordant with the Trump administration's often facts-optional assault on consumer protections that have bipartisan support, ranging from net neutrality to basic environmental protections. And the DOJ's sudden concern about the impact of media consolidation was in stark contrast to Trump's FCC, where demolishing decades-old media consolidation rules has been a top priority. At the time of the lawsuit, many wondered if some other motivations were really at play. After all, Rupert Murdoch had been pushing Trump for more than a year to scuttle the deal for anti-competitive reasons. Time Warner rejected a News Corp. acquisition offer in 2014, and more recently AT&T rebuffed the company's attempt to buy CNN... twice. Time Warner employees quoted at the time believed Murdoch was the driving motivation for the political pressure to quash the deal: "According to executives I spoke with, the theory is that Murdoch privately encouraged Trump to scuttle the deal as revenge for Time Warner rejecting Murdoch’s $80 billion takeover offer in 2014. “A direct competitor, who was spurned from buying us, perhaps is trying to influence the judicial process? That’s corruption on top of corruption,” one Time Warner executive told me." One obvious theory is that Murdoch convinced Trump to scuttle the deal as a competitive favor, perhaps with an eye on using the deal to force AT&T to divest CNN to Murdoch as a merger condition. This could have theoretically played well within Trumpland given Trump's personal disdain for critical CNN coverage, something that wouldn't be a problem under Murdoch control. Proving that of course is something else entirely, since it's unlikely that, were it true, anybody involved would put such an arrangement in writing. Fast forward to last month, when a New Yorker piece on the Trump White House's close ties to News Corporation included this notable but overlooked bit: "...in the late summer of 2017, a few months before the Justice Department filed suit, Trump ordered Gary Cohn, then the director of the National Economic Council, to pressure the Justice Department to intervene. According to a well-informed source, Trump called Cohn into the Oval Office along with John Kelly, who had just become the chief of staff, and said in exasperation to Kelly, “I’ve been telling Cohn to get this lawsuit filed and nothing’s happened! I’ve mentioned it fifty times. And nothing’s happened. I want to make sure it’s filed. I want that deal blocked!” That prompted House Judiciary Committee Chairman Jerrold Nadler and Representative David Cicilline to demand the White House turn over any correspondence that could help clear things up. But this week, the White House refused, insisting that all correspondence with advisors was protected: "In a letter dated Monday and released on Tuesday by Cicilline, White House counsel Pat Cipollone declined to release any documents, saying he would not provide “protected communications between the president and his senior advisers that are the very core of the executive branch’s confidentiality interests." Cipollone added that the Justice Department would be responding “in due course.” Again, it remains entirely possible that Trump simply wanted the deal spiked out of spite for CNN. 
But Murdoch's involvement here would explain a lot. The Trump DOJ of course proceeded to do a face-plant at trial, thanks in part to a Luddite judge, but also thanks to the DOJ lawyers' failure to mention net neutrality even once at trial. That was a rather important contextual omission, given AT&T's plans to anti-competitively leverage both its ownership of "must have" media (like HBO) and its domination of broadband networks. Was the DOJ just clueless, or did it not want to highlight the perils of the Trump FCC killing net neutrality? It's certainly not out of the realm of possibility that the DOJ tried to stop the deal without any external motivation, but it would be nice to see more data clearing that up one way or another. Permalink | Comments | Email This Story

Read More...
posted 3 days ago on techdirt
Okay, let's start with this even higher up this time: Dear Google RTBF reviewer (who I'm sure will be reading this soon): Under the terms of the GDPR, you are only supposed to agree to a content removal if what we are publishing is "personal data" that is no longer necessary, and (importantly, please read this, guys) that is not "for exercising the right of freedom of expression and information" or "for archiving purposes in the public interest, scientific or historical research purposes...." This post, like the last few, is a news story that is in the public interest, specifically about how someone is abusing the GDPR's "right to erasure" process to delete news reports about his abuse of the GDPR "right to erasure" process. This story is not about anything earlier that Thomas Goolnik may or may not have done. It is about what he did within the last few days. It is not old. It is not no longer relevant. It is directly relevant, and this post should not be subject to any GDPR right to erasure claims. Dear Thomas Goolnik: Seriously, dude? How much longer is this going to go on? It is legal for a news report to mention your name. We're not even talking about the original thing you want forgotten. We're talking about what you've been up to for the past few years trying to get everyone to forget the thing you want forgotten. Maybe let it go. Right. So if the rest of you hadn't guessed by now, Thomas Goolnik has, once again, successfully convinced Google to "erase" our most recent article about Thomas Goolnik getting Google to delete a previous article about Thomas Goolnik getting Google to delete a previous article about Thomas Goolnik getting Google to delete a previous article from its search results on the name Thomas Goolnik in the EU. Even if one were to agree that the original articles he wanted delisted from searches on his name (beginning with a NY Times article from 2002) qualified under the EU's RTBF guidelines -- which we don't believe they did -- the fact that Goolnik continues to get more recent articles about his abuse of the RTBF process delisted seems problematic. It seems like the sort of thing that is very much in the public interest to monitor and report on, seeing as many supporters of the GDPR insist that the RTBF process would not, in fact, be used to censor news stories. It is being used to do exactly that. Permalink | Comments | Email This Story

Read More...
posted 3 days ago on techdirt
Way back in late 2016, we asked the same question that has been on the minds of all of humanity for eons: who gets to trademark Iceland? If that seems like an odd question to you, perhaps a little context will help. See, Iceland has been a sovereign nation since the early 1900s, whereas Iceland Foods has been a grocery chain in the UK since the 1970s. And, yet, somehow the latter managed to get an EU-wide trademark for the term "Iceland" and then went around bullying companies from Iceland out of using that term in their own names, even when they weren't competing in the grocery marketplace. How did the EU manage to think it would be okay to grant this trademark in the first place, you ask? By not putting a whole lot of thought into it, would be my guess. Well, when Iceland, the country, applied for a trademark for "Inspired by Iceland", only to have it blocked by Iceland Foods, that apparently represented the last straw. Iceland petitioned the EU to invalidate the absurd trademark, leading to reps from Iceland Foods trekking out to meet with the nation's officials. The outcome of that meeting was apparently Iceland Foods being totally confused as to why Iceland wasn't just being cool, maaaaan. Well, this story has finally reached its conclusion, and that conclusion is the EU reversing its original error and invalidating the trademark. Now, years later, EUIPO has ruled in favour of Iceland – the country – and invalidated the supermarket’s trademark entirely, noting that “It has been adequately shown that consumers in EU countries know that Iceland is a country in Europe and also that the country has historical and economic ties to EU countries, in addition to geographic proximity.” Foreign Minister Guðlaugur Þór Þórðarson said he welcomed the ruling, but was not surprised by it. “…[I]t defies common sense that a foreign company can stake a claim to the name of a sovereign nation as was done [in this case],” he remarked. Well... yeah. That's right. The idea that the EU granted a trademark on the name of a nation within the European Economic Area is the kind of thing that proves it's impossible to write parody any longer. Sure, Iceland isn't officially in the EU, but trademark law has always looked askance at applications for terms that merely describe geography. None of this is new. Or difficult. Yet, for years Iceland Foods has been able to wield its absurd trademark against other businesses from Iceland, and against Iceland's government itself. Now, Iceland Foods has the option to appeal the ruling over the next couple of months. I can't imagine it will do so, though I wouldn't have guessed one could trademark "Iceland" to begin with, so... Permalink | Comments | Email This Story

Read More...
posted 3 days ago on techdirt
One of the issues that we've discussed quite a bit on Techdirt over the years is the lengths some people will go to in order to hide court records and other important public documents. The main story on this past weekend's Last Week Tonight with John Oliver tackled this issue in relation to Richard Sackler, the former chairman and president of Purdue Pharma, the company that developed and promoted Oxycontin. Much of the episode focused on questionable things said or done by Sackler, but towards the end, Oliver notes that Sackler has done an amazing job hiding from public scrutiny. There are very few pictures of him online, and no real videos that Oliver's team could find. Most of the Sackler family has done its very best to avoid publicly talking about the marketing of Oxycontin, or the astounding mess it has created for the world (though some members of the family have recently been complaining about guilt by association). However, a few years ago, in a lawsuit over the marketing of Oxycontin, Richard Sackler was forced to give a deposition, which has been held under seal. Somehow, ProPublica was able to get its hands on the transcript of the deposition and published it back in February. Since then, the family has been fighting against the release of the actual video recording of Sackler's deposition. There is tremendous public interest in this, as Oliver explains in the video above, and as ProPublica noted when it published the document: As part of the settlement, the Kentucky attorney general agreed to destroy its copies of 17 million pages of documents produced during the eight-year legal battle with Purdue. But some of the same documents remained in a sealed file in a rural eastern Kentucky courthouse. STAT filed a motion in 2016 asking the judge in that case to make the documents public, and he ordered the unsealing of those documents, including the Sackler deposition. “The court sees no higher value than the public (via the media) having access to these discovery materials so that the public can see the facts for themselves,” Pike Circuit Court Judge Steven Combs ruled in May 2016. Purdue appealed the ruling to the Kentucky Court of Appeals, which upheld it in December 2018. The company then asked the state Supreme Court to review that decision. ProPublica also notes that this "is believed to be the only time a member of the Sackler family has been questioned under oath about the illegal marketing of OxyContin and what family members knew about it." That's why the transcript is so important. However, as Oliver notes, the Sacklers have continued to fight the release of the video and various other documents related to the case -- so to "help out," he brought together a group of talented actors to act out parts of the deposition and put them up on the website SacklerGallery.com -- a nod to the fact that the Sacklers have gotten lots of museums to name galleries and wings and other things after them. The actors include Bryan Cranston, Michael Keaton, Richard Kind, and Michael K. Williams. I'll leave it to John Oliver in the video above to explain why each of them was used, because it's truly wonderful. Permalink | Comments | Email This Story

Read More...
posted 3 days ago on techdirt
When Google Fiber was first announced in 2010, it was lauded as a game changer for the broadband industry. Google Fiber, we were told, would revolutionize the industry by taking Silicon Valley money and disrupting the viciously uncompetitive and anti-competitive telecom sector. Initially, things worked out well; cities tripped over themselves offering all manner of perks to the company in the hopes of breaking free from the broadband duopoly logjam. And in markets where Google Fiber was deployed, prices dropped thanks to the added competition. The fun didn't last. In late 2016, Alphabet began getting cold feet about the high costs and slow returns of the project, and effectively mothballed the entire thing -- without admitting that's what it was doing. The company blew through several CEOs in just a few months, laid off hundreds of employees, froze any real expansion, and cancelled countless installations for users who had been waiting years. And while Google made a lot of noise about how it would be shifting from fiber to wireless to possibly cut costs, those promises so far appear stuck in neutral as well. The mess created by this abrupt about-face was felt most in cities like Louisville, which had tripped over themselves to please Google. After passing a bunch of new pole attachment rules and fending off an AT&T lawsuit over said rules, the city was suddenly left high and dry when Google announced last February that it would be retreating from Louisville. A big reason for that retreat is that Google subcontractors had screwed up the fiber microtrenching (burying fiber just a few inches below the road) the company was using as an alternative to city (and AT&T) utility poles. There's plenty of animosity in Louisville about Google's sudden retreat, though the company made partial amends this week by paying $3.8 million in a bid to clean up the mess left in its wake: "Google Fiber will pay $3.84 million to Louisville Metro Government (LMG) to restore roads and other public rights-of-way affected by its departing service in Louisville. Louisville Metro Government and Google Fiber agreed to these payments to fulfill the company’s obligations under its franchise agreement and local regulations, which require restoration of rights-of-way should a service provider end service in Louisville. Citing technical challenges, Google Fiber announced its exit from Louisville in February." Google also made a $150,000 cash donation to the Community Foundation of Louisville’s Digital Inclusion Fund to support local digital education efforts. While that closes the book on Louisville's fiber aspirations, Google still has a problem on its hands. It's clear to everybody watching that the company is no longer really interested in disrupting telecom, but it keeps publicly telling customers and the press that nothing has truly changed. With the company's fiber efforts frozen and its wireless pivot apparently going nowhere (in fact, the effort appears to have shrunk since Google acquired Webpass), Google needs to either come clean about its waning interest (and likely sell the project off) or explain why -- if the project is still important to the company -- the entire effort has been stuck in neutral for several years. Permalink | Comments | Email This Story

Read More...
posted 3 days ago on techdirt
A bit of deja vu here. Once again, the EU Parliament has done a stupid thing for the internet. As we've been discussing over the past few months, the EU has been pushing a really dreadful "EU Terrorist Content Regulation," whose main feature is a requirement that any site that can be accessed from the EU must remove any content deemed "terrorist content" by any vaguely defined "competent authority" within one hour of being notified. The original EU Commission version also included a requirement for filters to block reuploads, and a provision that effectively turned websites' terms of service documents into de facto law. In moving the Regulation to the EU Parliament, the civil liberties committee LIBE stripped the filters and the terms of service parts from the proposal, but kept the one-hour takedown requirement. In a vote earlier today, the EU Parliament approved the version put forward by the committee, rejecting (bad) amendments to bring back the upload filters and the empowered terms of service, but also rejecting -- by just three votes -- an amendment to remove the insane one-hour deadline. Since this version is different from the absolutely bonkers one pushed by the European Commission, it now needs to go through a trilogue negotiation to reconcile the different versions, which will eventually lead to another vote. Of course, what that vote will look like may be anyone's guess, given that the EU Parliamentary elections are next month, so it will be a very different-looking Parliament by the time this comes back around. Either way, this whole concept is a poorly thought out, knee-jerk moral panic from people who are scared of the internet and don't understand how it works. Actually implementing this in law would be disastrous for the EU and for internet security. The only way, for example, that we could comply with the law would be to hand over backend access to our servers to strangers in the EU and empower them to delete whatever they wanted. This is crazy and not something we would ever agree to do. It is unclear how any company -- other than the largest companies -- could possibly even pretend to try to comply with the one-hour deadline, and even then (as the situation with the Christchurch video showed) there is simply no way for even the largest and best-resourced teams out there to remove this kind of content within one hour. And that's not even touching on the questions around who gets to determine what is "terrorist content," how it will be abused, and what this will mean for things like historical archives or open source intelligence. This entire idea is poorly thought out, poorly implemented, and a complete mess. So, of course, the EU Parliament voted for it. Hopefully, next month's elections will give us a more sensible cohort of MEPs. Permalink | Comments | Email This Story

Read More...
posted 3 days ago on techdirt
Saying that Section 230 of the Communications Decency Act (CDA 230) is a "gift" to internet companies that should be taken away because some people use the internet badly is like saying the interstate highway system is a "gift" to the big shipping companies, and should be destroyed because some people send illegal things via UPS or FedEx. As Section 230 comes increasingly under attack, one of the most common lines we hear about it is that it was somehow a "gift to internet companies." I heard something along those lines at least three times last week, not even counting Nancy Pelosi's misguided characterization of 230, in which she said: “230 is a gift to them, and I don’t think they are treating it with the respect that they should,” she said. “And so I think that that could be a question mark and in jeopardy. ... For the privilege of 230, there has to be a bigger sense of responsibility on it, and it is not out of the question that that could be removed.” Except, as we noted last week, this gets the entire story backwards. The point of Section 230 is not to benefit the big internet companies. It is to benefit the public. It has enabled people to speak freely on the internet, because Section 230 has freed up the ability of platforms to host user-generated content without fear of being held liable for it. Do some people post awful (or even illegal) things? Absolutely. But just as we don't demand smashing up the interstate highway system because some drug dealers ship drugs via FedEx, we shouldn't demand that the government rip up Section 230. The overwhelming beneficiaries of Section 230 are the public. It has -- incidentally -- helped some internet companies fend off misguided and often vexatious legal threats by simply stating that any legal action should be directed at those actually responsible. That's not a "gift" -- it's a protection against frivolous, misguided lawsuits. So the next time you see people claiming that Section 230 was a gift to the internet companies, please remind them that it's not at all true -- rather, Section 230 was a gift to the public, enabling more freedom of expression online, and enabling the internet to take root. Ripping up 230 because of a few examples of bad content online would be like ripping out the interstate highway system to prevent anyone from shipping drugs. It is both a massive overreaction and a totally misdirected one. Permalink | Comments | Email This Story

Read More...
posted 3 days ago on techdirt
TREBLAB Z2 Wireless Noise-Cancelling Headphones feature top-grade, high-performance neodymium-backed 40mm speakers. The Z2s use T-Quiet active noise-canceling technology to drown out unwanted background noise and have a signal range of 38 feet. With a 35-hour battery life, you can listen for multiple days between charges. They're on sale for $79. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team. Permalink | Comments | Email This Story

Read More...