posted 1 day ago on techdirt
Five Years Ago

This week in 2015, we got a big, confusing mess of a ruling on fair use and the DMCA in the famous "dancing baby" video lawsuit. We also saw a loss for the Motion Picture Academy after its five-year crusade to make GoDaddy pay for "infringing" websites, and the owner of the Miami Heat was hit with $155,000 in legal fees after losing his bogus copyright lawsuit against a blogger. Meanwhile, China was beginning a big push to get American tech companies to agree to its rules, while the DOJ was backing down from charges against a professor driven by China hysteria.

Ten Years Ago

This week in 2010, Yelp got yet another Section 230 victory against an attempt to hold it liable for bad reviews, while a reputation management company was threatening to launch a similar lawsuit against TripAdvisor in the UK, in what appeared to be a publicity stunt. A terrible appeals court ruling was killing the first sale doctrine, while Craigslist was engaged in a fight with South Carolina's attorney general and we wondered why other internet companies weren't standing up for it. And the latest big DRM-breaking event happened with the apparent leak of the HDCP master key, which was soon confirmed by Intel.

Fifteen Years Ago

This week in 2005, the fights over online reviews were in their infancy, with doctors leading the charge. eBay spent an eye-watering amount of money to purchase Skype, and we noted this meant the company needed to become an expert on net neutrality, fast. The RIAA was going around overstating the results of the Grokster case, while the courts in Taiwan were contradicting an earlier ruling on the legality of file sharing software by sending file sharing executives to jail. And Lego was suing a Danish artist for using her middle name -- "Lego" -- to sign her paintings.

posted 1 day ago on techdirt
Veteran Techdirt readers will have been so tempered by stories about Monster Energy playing the trademark bully at this point that the mere mention of the company should cause them to roll their eyes. Still, the history of what we've covered in Monster's attempt to win the trademark-protectionist championship is constructive in one very important way: Monster Energy regularly loses these disputes. That in itself shouldn't be terribly surprising; the company's decisions on just how often to enforce its trademark rights are often so absurd that it would be a shock if it put together any sort of real winning streak. But what is surprising is when victims of Monster's bullying choose to concede, given that losing track record. Yet it happens, even when the victim is a large enough entity that it could fight if it wanted to. A recent example is how Ubisoft changed the name of an upcoming video game after Monster Energy opposed its trademark application. Ubisoft's Gods & Monsters recently underwent some rebranding, switching its name to the demonstrably worse Immortals Fenyx Rising a few weeks ago. It has gone over like a lead balloon. In fact, it had our team wondering if we should just refuse the new name and stick with the old one! As uncovered by TechRaptor, Monster Energy opposed Ubisoft's trademark for the title "Gods & Monsters." The logic goes that Monster has enough of a presence within video games that Ubisoft's use could reasonably cause confusion among consumers. That logic runs counter to the purpose of trademark law, to how trademark law actually works in terms of market designations, and to good business and marketing. Taking those in reverse order: the name change is almost objectively terrible. I have yet to find any publication that thinks the title switch was even a wash for Ubisoft, never mind beneficial. 
The universal opinion seems to be, and I agree with it, that Ubisoft to one extent or another engaged in a bit of self-harm with this rebranding. Now, on to the actual legal question. The consensus here, too, seems to be that Ubisoft could easily have won this battle on the merits, but chose not to fight simply to avoid any delay stemming from a legal battle. Playing armchair attorney, this seems like something Ubisoft probably could've won, no? My guess is that it has less to do with whether or not Ubisoft cared to spend the money on this legal battle, and more to do with just getting the game out on shelves. Immortals has been delayed already, and its sales factor into Ubisoft's fiscal year that ends in March 2021. Fighting a protracted trademark infringement case would further delay the game. Going ahead with the name Gods & Monsters would result in an injunction. Ubisoft may be in the right, but it doesn't have the time to prove it. Which is all probably true, but only if Ubisoft couldn't have gotten a declaratory judgment when Monster Energy first opposed the trademark application. Because it is quite clear that there is no infringement here. Whatever participation Monster Energy has in the video game space, most of which is mere sponsorship and advertising, it still isn't a maker of video games. Ubisoft should have needed merely to point that out to get its use declared legit. Couple that with the broader question as to whether literally anyone would make the association between a video game called Gods & Monsters and an energy drink company, and I would guess getting a court to side with it would have been fairly easy for Ubisoft. But Ubisoft decided against that route and bowed to Monster Energy's bullying. Which is how we get Immortals Fenyx Rising instead of Gods & Monsters. An objectively worse name. For no reason other than trademark bullying. Cool.

posted 2 days ago on techdirt
Summary: In the early 1990s, facing increased pressure from the commercial sector, which sensed there might be some value in the nascent “Internet,” the National Science Foundation began easing informal restrictions on commercial activity over the Internet. This gave rise to the earliest internet companies -- but also to spam. Before the World Wide Web had really taken off, a great deal of internet communication took place on Usenet, created in 1980, which was what one might think of as a proto-Reddit, with a variety of “newsgroups” dedicated to different subjects that users could post to. Usenet was a decentralized service based on the Network News Transfer Protocol. Users needed a Usenet reader, from which they would connect to any number of Usenet servers and pull down the latest content in the newsgroups they followed. In early 1994, a husband and wife lawyer team, Laurence Canter and Martha Siegel, decided that they would advertise their legal services regarding immigration to the US (specifically, help with the infamous “Green Card Lottery”) on Usenet. They hired a programmer to write a Perl script that posted their advertisement to 5,500 separate newsgroups. While cross-posting was possible (a single post designated for multiple newsgroups), this particular message was posted individually to each newsgroup, which made it even more annoying for users -- since most Usenet reader applications would have recognized the same message as “read” across different newsgroups if it had merely been cross-posted. Posting it this way guaranteed that many people saw the message over and over and over again. It is generally considered one of the earliest examples of commercial “spam” on the internet -- and certainly the most “successful” at the time. It also angered a ton of people. 
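The difference between cross-posting and posting individually can be sketched concretely. This is a minimal illustration with made-up group names and addresses, using Python's standard email.message module purely to show the header layout (real Usenet posting happens over NNTP, and actual article headers have more fields):

```python
from email.message import EmailMessage

# Hypothetical newsgroup names and addresses, purely for illustration.
GROUPS = ["misc.legal", "alt.visa.us", "comp.misc"]

def build_crosspost(groups):
    """One article whose Newsgroups header names every group.

    Servers store a single copy, and most newsreaders mark it
    'read' everywhere once the user has seen it in any one group."""
    msg = EmailMessage()
    msg["From"] = "advertiser@example.com"
    msg["Subject"] = "Green Card Lottery"
    msg["Newsgroups"] = ",".join(groups)  # single article, many groups
    msg.set_content("Advertisement body...")
    return msg

def build_individual_posts(groups):
    """One separate article per group, each with its own Message-ID.

    Readers treat each copy as a distinct, unread article in every
    group -- the tactic that made the Canter & Siegel spam so annoying."""
    posts = []
    for i, group in enumerate(groups):
        msg = EmailMessage()
        msg["From"] = "advertiser@example.com"
        msg["Subject"] = "Green Card Lottery"
        msg["Newsgroups"] = group
        msg["Message-ID"] = f"<spam-{i}@example.com>"
        msg.set_content("Advertisement body...")
        posts.append(msg)
    return posts

crosspost = build_crosspost(GROUPS)
individual = build_individual_posts(GROUPS)
print(crosspost["Newsgroups"])  # misc.legal,alt.visa.us,comp.misc
print(len(individual))          # 3
```

The cross-posted version is one article; the individual version is 5,500 distinct articles in the real incident, which is why readers saw the ad again in every single group.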
According to Time Magazine, Canter and Siegel faced immediate backlash:

In the eyes of many Internet regulars, it was a provocation so bald-faced and deliberate that it could not be ignored. And all over the world, Internet users responded spontaneously by answering the Spammers with angry electronic-mail messages called "flames." Within minutes, the flames -- filled with unprintable epithets -- began pouring into Canter and Siegel's Internet mailbox, first by the dozen, then by the hundreds, then by the thousands. A user in Australia sent in 1,000 phony requests for information every day. A 16-year-old threatened to visit the couple's "crappy law firm" and "burn it to the ground." The volume of traffic grew so heavy that the computer delivering the E-mail crashed repeatedly under the load. After three days, Internet Direct of Phoenix, the company that provided the lawyers with access to the Net, pulled the plug on their account.

It wasn’t just Usenet users. Immigration lawyers were also upset, in part because Canter and Siegel were asking for money to do what most people could easily do for free:

Unfortunately, it also provided an opportunity for charlatans to charge exorbitant fees to file lottery entries for hopeful immigrants. In truth, all it took to enter the drawing was a postcard with your name and address mailed to the designated location. Canter and Siegel, a husband-and-wife law firm, decided to join the lottery frenzy by pitching their own overpriced services to immigrant communities.

The two were unrepentant, later claiming they made over $100,000 from the advertisement. They quickly set up a new company called “Cybersell” to do this for others -- and signed a contract to write a book for HarperCollins originally called "How To Make A Fortune On The Information Superhighway."

Decisions to be made by Usenet server providers:

- Would they need to start being more aggressive in monitoring and moderating their newsgroups?
- Would it even be possible to prevent spam?
- Should they even carry newsgroups that allowed for open contributions?

Decisions to be made by ISPs:

- Should they allow Canter and Siegel to use their internet access to spam newsgroups?
- How should they handle the backlash from users angry about the spam campaigns?

Questions and policy implications to consider:

- What is the boundary between allowed commercial speech or advertising and spam? How do you distinguish it?
- Is it possible to have distributed systems (as opposed to centralized ones) that don’t end up filled with spam?
- What are the legal implications of spam?

Resolution: Canter and Siegel remained a scourge on the internet for some time. Various service providers were quick to kick them off as soon as it was discovered that they were using them. Indeed, many seemed willing to talk publicly about their decisions, such as Netcom, which shut down their account soon after the original spam happened and after Canter and Siegel had announced plans to continue spamming:

NETCOM On-Line Communications has taken the step of cancelling the service of Laurence Canter of Canter and Siegel, the lawyer commonly referred to as the "Green Card Lawyer". Mr. Canter had been a customer of NETCOM in the past. He had been cautioned for what we consider abuse of NETCOM's system resources and his systematic and willful actions that do not comply with the codes of behavior of USENET. Mr. Canter has been widely quoted in the print and on-line media about his intention to continue his practice of advertising the services of his law firm using USENET newsgroups. He has also widely posted his intention to sell his services to advertise for others using the newsgroups. We do not choose to be the provider that will carry his messages.

That link also has notices from other service providers, such as Pipeline and Performance Systems, saying they were removing internet access. 
Others focused on trying to help Usenet server operators get rid of the spam. Programmer Arnt Gulbrandsen quickly put together a tool to help fight this kind of spam by “cancelling” the messages when spotted. This helped establish the early norm that it was okay to block and remove spam. As for Canter and Siegel, they divorced a couple of years later, though both kept promoting themselves as internet marketing experts. Canter was disbarred in Tennessee for his internet advertising practices, though he had already moved on from practicing law. Cybersell, the company they had set up to do internet advertising, was apparently dissolved in 1998.
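The "cancel" mechanism that tools like Gulbrandsen's relied on is part of the classic Usenet article format itself: a small control article whose Control header names the Message-ID to be deleted. A minimal sketch of what such an article looks like, again using Python's email.message module just to show the headers; the addresses, group name, and Message-ID here are made up for illustration:

```python
from email.message import EmailMessage

def build_cancel(target_message_id, poster="admin@example.com"):
    """Build a Usenet 'cancel' control article asking servers to
    delete the article with the given Message-ID. Whether a server
    honors it is the server's decision -- which is exactly the
    moderation norm this episode helped establish."""
    msg = EmailMessage()
    msg["From"] = poster
    msg["Newsgroups"] = "misc.legal"  # same group the spam appeared in
    msg["Subject"] = f"cmsg cancel {target_message_id}"
    msg["Control"] = f"cancel {target_message_id}"  # the operative header
    msg.set_content("Spam cancelled by administrator.")
    return msg

cancel = build_cancel("<spam-0@example.com>")
print(cancel["Control"])  # cancel <spam-0@example.com>
```

An anti-spam tool would watch the incoming article stream for mass-posted duplicates and emit one cancel per spam Message-ID, propagating the removal across cooperating servers.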

posted 2 days ago on techdirt
Though it doesn't grab the same headline attention as the silly and pointless TikTok ban, the lack of security and privacy standards in the internet of things (IOT) is arguably a much bigger problem. TikTok is, after all, just one app, hoovering up consumer data in a way that's not particularly different from the 45,000 other international apps, services, governments, and telecoms doing much the same thing. The IOT, in contrast, involves millions of feebly secured products being attached to home and business networks every day. Many of them are also made in China, and many feature microphones and cameras. Thanks to a laundry list of lazy companies, everything from your Barbie doll to your tea kettle is now hackable. Worse, these devices are now being quickly incorporated into some of the largest botnets ever built, resulting in devastating and historic DDoS attacks. In short: thanks to "internet of things" companies that prioritized profits over consumer privacy and the safety of the internet, we're now facing a security and privacy dumpster fire that many experts believe will, sooner or later, produce some notably nasty results. To that end, the House this week finally passed the Internet of Things Cybersecurity Improvement Act, which should bring some meaningful privacy and security standards to the internet of things (IOT). Cory Gardner, Mark Warner, and other lawmakers note the bill creates some baseline standards for security and privacy that must be consistently updated (what a novel idea), while prohibiting government agencies from using gear that doesn't pass muster. It also includes some transparency requirements mandating that any vulnerabilities in IOT hardware are disseminated among agencies and the public quickly: "Securing the Internet of Things is a key vulnerability Congress must address. 
While IoT devices improve and enhance nearly every aspect of our society, economy and everyday lives, these devices must be secure in order to protect Americans’ personal data. The IoT Cybersecurity Improvement Act would ensure that taxpayers dollars are only being used to purchase IoT devices that meet basic, minimum security requirements. This would ensure that we adequately mitigate vulnerabilities these devices might create on federal networks." Again, it's not going to get the same attention as the TikTok pearl clutching, but it's arguably more important. The IOT is simultaneously a successful sector and one suffering from a form of market failure. I come back a lot to this Bruce Schneier blog post because I think it explains IOT dysfunction rather well: "The market can’t fix this because neither the buyer nor the seller cares. The owners of the webcams and DVRs used in the denial-of-service attacks don’t care. Their devices were cheap to buy, they still work, and they don’t know any of the victims of the attacks. The sellers of those devices don’t care: They’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution, because the insecurity is what economists call an externality: It’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution." One problem is that consumers often don't know what they're buying because sellers aren't transparent, which is why groups like Consumer Reports have been working on an open source standard to include security and privacy issues in product reviews. Another big problem is that these devices are rarely designed with GUIs that provide transparent insight into what these devices are doing online. 
And unless users have a semi-sophisticated familiarity with monitoring their internet traffic via a router, they likely have no idea that their shiny new internet-connected doo-dad is putting themselves, and others, at risk. Fixing the IOT requires collaboration between consumers, vendors, governments, and security experts, and so far that coordination has been patchy at best. Instead of developing policies and standards that address an entire sector's worth of security and privacy problems, the U.S. adores hyperventilating about individual threats (see: TikTok) then pushing policies (see: the TikTok ban) that don't actually accomplish that much. U.S. data privacy and security is a problem that requires a much wider view, instead of this bizarre, inconsistent consternation that's more ADHD Whac-a-Mole than serious policy.

posted 2 days ago on techdirt
The federal government's Office of Legal Counsel (OLC) tells government agencies what they can and can't do under existing law. Its interpretation of these laws may vary significantly from how they've been interpreted by courts. The OLC has been asked to justify everything from warrantless searches to extrajudicial killings. The bespoke law interpretations that justify these actions are then withheld from the public -- often for decades at a time. The OLC has refused to turn these over to FOIA requesters, citing a number of FOIA exemptions. It does this with older decisions as well -- ones Congress has said must be released to the public. 2016's amendment of the Freedom of Information Act prohibits agencies from withholding "deliberative" records -- which is much of what the OLC produces -- that are over 25 years old. The OLC violated this change in the law immediately, prompting a lawsuit by the Knight Institute that the Institute ultimately won. But it wasn't the only lawsuit brought against the OLC by the Knight Institute over FOIA violations. The OLC was also sued for violating the "reading-room provision," which obligates agencies to process and release certain documents, even in the absence of a FOIA request for these documents. The OLC has refused to do this. The court said the OLC's refusal to comply was good and lawful, but only for some subsets of its document stash. The litigation continued to determine what was exempt and what was subject to proactive release. In October 2017, the district court granted the government’s original motion to dismiss but afforded the Campaign for Accountability an opportunity to focus more narrowly on specific categories of OLC opinions. 
The Knight Institute filed an amended complaint highlighting several categories of OLC opinions — those (i) resolving interagency disputes; (ii) interpreting nondiscretionary legal obligations; (iii) finding particular statutes unconstitutional; and (iv) adjudicating or determining individual rights. The court has now handed down its ruling [PDF] and it agrees with the Knight Institute and its co-plaintiff, Campaign for Accountability (CfA) on one category of OLC opinions: [F]or now, the Court finds that CfA’s amended complaint contains a plausible allegation that OLC is required to make its opinions that resolve inter-agency disputes available for “public inspection” under section 552(a)(2) of the FOIA, for the reasons explained above, and that the other categories of OLC opinions identified in the amended complaint do not plausibly violate the FOIA’s reading-room provision. The court says these documents are likely "final opinions" (which would make sense, since they "resolve disputes") and subject to the proactive release obligations contained in the "reading-room provision." This could prompt a flood of releases. The Knight Institute estimates these resolution opinions make up about a quarter of all opinions sent by the OLC to other agencies. Then again, it may not result in much of anything. The OLC spent most of the Obama years watching its workload dwindle as agencies became more worried about the possibility of legal opinions being released to FOIA requesters than with ensuring their actions were lawful. OLC opinions dropped from ~30/year at the beginning of Obama's presidency to less than 10/year by 2015. The end result of years of litigation could be a small handful of opinions that won't do much to inform the public about how the OLC interprets existing laws. But the precedent set here is worth celebrating. An entire category of OLC opinions has been declared subject to proactive release by the Office. And that's a much-needed improvement.

posted 2 days ago on techdirt
Famed law professor Alan Dershowitz is at it again. He's now suing CNN for defamation in a SLAPP suit, because he's upset that CNN did not provide an entire quote he made during the impeachment trial before the US Senate, claiming that because he was quoted out of context, it resulted in people believing something different than what he actually meant with a quote. Reading the lawsuit, the argument is not all that different from the defamation claim made by another Harvard Law professor, Larry Lessig, earlier this year, in which he accused the NY Times and a reporter there of defamation for taking his comments out of context. Lessig later dropped that lawsuit. In both cases, these law professors are effectively arguing that when they make convoluted arguments, you must include all of the nuances and context, or you might face defamation claims. That's incredibly chilling to free speech, and not how defamation law works. Dershowitz's complaint is that during the trial, he made the following claim: “The only thing that would make a quid pro quo unlawful is if the quo were somehow illegal. Now we talk about motive. There are three possible motives that a political figure could have. One, a motive in the public interest and the Israel argument would be in the public interest. The second is in his own political interest and the third, which hasn’t been mentioned, would be his own financial interest, his own pure financial interest, just putting money in the bank. I want to focus on the second one for just one moment. Every public official that I know believes that his election is in the public interest and, mostly you are right, your election is in the public interest, and if a president does something which he believes will help him get elected in the public interest, that cannot be the kind of quid pro quo that results in impeachment." 
Dershowitz is upset that CNN aired a segment that showed just that final sentence: Every public official that I know believes that his election is in the public interest and, mostly you are right, your election is in the public interest, and if a president does something which he believes will help him get elected in the public interest, that cannot be the kind of quid pro quo that results in impeachment. But here's the thing: CNN also did air the full segment. And Dershowitz admits this. He's just upset that at other times they only aired part of it, and that some commentators don't paraphrase it the way he wanted them to. Here's where he admits that CNN did, in fact, air the entire clip: Immediately after Professor Dershowitz presented his argument, CNN employees, Wolf Blitzer and Jake Tapper, played the entire clip properly, so CNN knew for certain that Professor Dershowitz had prefaced his remarks with the qualifier that a quid pro quo could not include an illegal act. That portion then disappeared in subsequent programming. It disappeared because the longer quote is long, and people were focused on the key part -- that final sentence. Many people -- including some on CNN -- mocked Dershowitz for those remarks. Because they're ludicrous. Even with the full paragraph. But the mockable part is the final sentence, and that's why it's news. And the CNN commentators who mocked it were commentators -- people paid to give their opinion on what Dershowitz said. But, as with Lessig's lawsuit, the complaint from Dershowitz is that commentators' opinions about what was said differ from what was meant. But opinions cannot be defamatory. And if people misinterpreted what Dershowitz said, that's on Dershowitz for not explaining it clearly enough. We're in a world of trouble if people get to sue for defamation every time someone misunderstands their poorly made argument. I can understand why it's frustrating for people to completely misunderstand your argument. 
It happens all the time to lots of people -- including myself. It happens quite often when people try to make carefully nuanced arguments. But misunderstanding, or even misrepresenting, a more nuanced argument is not defamation. And nothing in Dershowitz's lawsuit changes that. Dershowitz's lawsuit hangs its hat on the Masson v. New Yorker Supreme Court ruling from 1991. Dershowitz's complaint describes that ruling as follows: ... the Court held that a media organization can be held liable for damages when it engages in conduct that changes the meaning of what a public figure has actually said. While Masson involved the use of quotation marks to falsely attribute words to Jeffrey Masson, the law that the case created is broad, and unequivocally denies first amendment protections to a media organization that takes deliberate and malicious steps to change the meaning of what a public figure has said. That is exactly what CNN did when it knowingly omitted the portion of Professor Dershowitz’s words that preceded the clip it played time and time again. This is... not an accurate portrayal of the Masson case or ruling. And, yes, I recognize that there's some irony in Dershowitz claiming it's defamation to misrepresent him while his lawsuit then misrepresents a key Supreme Court case that it relies on. The Masson case is a fun one to read. It involves an article (and then a book made out of the article) about an academic, where it appears that the author didn't just selectively quote the academic, but made up quotes. The ruling compares the quotes in the article to the tape recordings of interviews to note just how different the quotes in the story are from what was actually said. That's... not what is happening here. It is true that one of the quotes in the Masson case involved selectively excising some of a quote, but that was done in a truly egregious way. 
It wasn't that they left out context, it was that they excised a middle portion, to make a later portion appear that it was referring to something much earlier, rather than what was excised. That is... not what happened to Dershowitz. Indeed, the Masson ruling works against Dershowitz in many ways. It actually says that you have to expect the press to take your long rambling comments and tighten them up, because that's part of journalism: Even if a journalist has tape-recorded the spoken statement of a public figure, the full and exact statement will be reported in only rare circumstances. The existence of both a speaker and a reporter; the translation between two media, speech and the printed word; the addition of punctuation; and the practical necessity to edit and make intelligible a speaker's perhaps rambling comments, all make it misleading to suggest that a quotation will be reconstructed with complete accuracy. The use or absence of punctuation may distort a speaker's meaning, for example, where that meaning turns upon a speaker's emphasis of a particular word. In other cases, if a speaker makes an obvious misstatement, for example by unconscious substitution of one name for another, a journalist might alter the speaker's words but preserve his intended meaning. And conversely, an exact quotation out of context can distort meaning, although the speaker did use each reported word. In all events, technical distinctions between correcting grammar and syntax and some greater level of alteration do not appear workable, for we can think of no method by which courts or juries would draw the line between cleaning up and other changes, except by reference to the meaning a statement conveys to a reasonable reader. 
To attempt narrow distinctions of this type would be an unnecessary departure from First Amendment principles of general applicability, and, just as important, a departure from the underlying purposes of the tort of libel as understood since the latter half of the 16th century. From then until now, the tort action for defamation has existed to redress injury to the plaintiff's reputation by a statement that is defamatory and false. In the Masson case, the Court did find that many of the changes to the text, including that one section, involved a "material" difference in meaning, and therefore could be found defamatory by a jury. But this case is very, very different than what Dershowitz is claiming about CNN. They didn't quote his whole line, but there is no requirement they quote his entire argument. Then there's the whole damages bit. According to Dershowitz, his reputation was damaged to the tune of $300 million because some people made fun of him on CNN, and it's all their fault that they didn't understand his poorly made argument. The fucking entitlement of this guy. The damage to Professor Dershowitz’s reputation does not have to be imagined. He was openly mocked by most of the top national talk show hosts and the comments below CNN’s videos show a general public that has concluded that Professor Dershowitz had lost his mind. Being mocked on TV is proof of damages? Really, now? How fragile is Dersh's ego here? Multiple times in the lawsuit, Dershowitz's lawyer (yes, he found an actual Florida man lawyer to file this lawsuit) talks about how only playing part of his long silly answer would lead people to believe that Dersh had "lost his mind": The very notion of that was preposterous and foolish on its face, and that was the point: to falsely paint Professor Dershowitz as a constitutional scholar and intellectual who had lost his mind. With that branding, Professor Dershowitz’s sound and meritorious arguments would then be drowned under a sea of repeated lies. 
If only airing one sentence of your preposterous argument makes you look like you've lost your mind, perhaps the problem is in how you frame your arguments. This is yet another SLAPP suit. Florida has an anti-SLAPP law, but it's a mixed bag in terms of how strong it is. Of course, as with many SLAPP suits, the real goal is likely to just be intimidation, rather than to actually win a vexatious nonsense lawsuit.

posted 2 days ago on techdirt
The Ultimate All-Access Business Bundle has 12 courses to help you learn new business skills to boost your business towards success. You'll learn how to motivate employees, delegate tasks, manage personal finances, ace interviews, and more. The bundle is on sale for $35. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

posted 2 days ago on techdirt
This morning the Commerce Department released the details of how the WeChat and TikTok bans will work. It's possible that the ban on TikTok will get lifted if Treasury Secretary Mnuchin can convince enough people in the administration to buy into the grifty Oracle non-sale, but the WeChat ban is happening no matter what. The details reinforce two key points. First, this is way unconstitutional, and should be offensive to any 1st Amendment/free speech supporter. Second, the excuses about national security are utter and total garbage, because this would actually make users of those apps significantly less secure. So, great. We have some application bans, premised on national security, that are unconstitutional piles of garbage that make people less secure, and the only possible path out is through a grifty deal, pushed deliberately to a large donor to the President, who has said multiple times he's hoping for a kickback on the deal. We're witnessing an astounding bit of corruption right here. Here's how the "ban" will work. First up, both apps get banned from all US app stores. The following is listed as "prohibited": Any provision of service to distribute or maintain the WeChat or TikTok mobile applications, constituent code, or application updates through an online mobile application store in the U.S. That's basically saying: "Apple and Google can no longer put those apps in their app stores." There are 1st Amendment concerns here, in that the executive branch is telling software companies what code they can or cannot host. While the IEEPA law under which this order is being made is broad, this seems ripe for a huge 1st Amendment challenge. The President should not be able to simply ban code from app stores based on an unsubstantiated claim of "national security." Second, not only is this all based on unsubstantiated claims of national security, the very text proves how that's bullshit. 
The fact that these app stores can no longer issue updates means that people who have the apps currently can continue using them, but if there's a security update (say to patch a vulnerability) users can no longer patch those apps. If the goal of this ban is to "protect national security," everything here is exactly the opposite of that. Users will still have the app, but are unable to protect themselves and can only keep using the app if they accept the obsolete and increasingly less secure version of it. In other words: the whole "national security" claim is a total lie, because the way the ban is implemented gives Americans less security. That sure is one way to fight back against supposed Chinese surveillance through these apps. If it's even true that China is spying on people via apps, they're now in a "don't throw me in the briar patch" situation -- since the US government is forcing these apps to be less secure and to expose even more data to whoever has it. Another part of the ban that raises significant 1st Amendment issues is that it prohibits: Any utilization of the mobile application’s constituent code, functions, or services in the functioning of software or services developed and/or accessible within the U.S. Translating that: it means that no US developer can use WeChat or TikTok's APIs or build software using any of their code. That's deliberately interfering with the speech of Americans. Leaving aside the issue of whether or not banning apps that allow for communications is a 1st Amendment issue. Leaving aside the issue of whether or not banning apps at all is a 1st Amendment issue. This goes even further: it says that US-based software developers cannot write the code they want. That's a huge 1st Amendment issue. I discussed this a few months ago, but the Supreme Court has already said that code is speech in Brown v. 
Entertainment Merchants Association (the case about whether or not the government could regulate video games and require age warnings). And, while it's not the Supreme Court, the 2nd Circuit has been even more direct about code being speech protected by the 1st Amendment in the Universal v. Corley case (about whether or not you could publish code that breaks DRM): Communication does not lose constitutional protection as "speech" simply because it is expressed in the language of computer code. Mathematical formulae and musical scores are written in "code," i.e., symbolic notations not comprehensible to the uninitiated, and yet both are covered by the First Amendment. If someone chose to write a novel entirely in computer object code by using strings of 1's and 0's for each letter of each word, the resulting work would be no different for constitutional purposes than if it had been written in English. The "object code" version would be incomprehensible to readers outside the programming community (and tedious to read even for most within the community), but it would be no more incomprehensible than a work written in Sanskrit for those unversed in that language. The undisputed evidence reveals that even pure object code can be, and often is, read and understood by experienced programmers. And source code (in any of its various levels of complexity) can be read by many more. Ultimately, however, the ease with which a work is comprehended is irrelevant to the constitutional inquiry. If computer code is distinguishable from conventional speech for First Amendment purposes, it is not because it is written in an obscure language. Later in that ruling: Computer programs are not exempted from the category of First Amendment speech simply because their instructions require use of a computer. A recipe is no less "speech" because it calls for the use of an oven, and a musical score is no less "speech" because it specifies performance on an electric guitar. 
Arguably distinguishing computer programs from conventional language instructions is the fact that programs are executable on a computer. But the fact that a program has the capacity to direct the functioning of a computer does not mean that it lacks the additional capacity to convey information, and it is the conveying of information that renders instructions "speech" for purposes of the First Amendment. Based on all of that, it is difficult to see how this broad ban can possibly stand up to 1st Amendment scrutiny on multiple levels. The banning of US developers coding using these companies' APIs is a 1st Amendment violation. The ban on US companies hosting their code is a 1st Amendment violation. The ban on apps used for speech is likely a 1st Amendment violation (on par with breaking up printing presses). So, these bans appear to violate the 1st Amendment in multiple different ways. And for what? The claim is "to protect national security." We already knew that was bogus, as all of the info anyone can get from TikTok is already widely available for purchase. But now with the details coming out, showing that the ban would make the data of US users of these services even less secure by blocking updates, we have even more evidence that the national security claims are a joke. And thus, the bans are likely unconstitutional on multiple different grounds, serve no genuine national security purpose, and don't seem to do anything other than potentially put a lucrative business deal in the pocket of a top Trump supporter. How is there anyone out there who thinks this is a reasonable thing?

Read More...
posted 2 days ago on techdirt
As economists and experts had warned, the recent $26 billion Sprint T-Mobile merger effectively decimated the prepaid space. T-Mobile had already laid off around 6,000 employees at its Metro Prepaid division, with more layoffs expected. Many of the "mobile virtual network operators" that operated on Sprint's network now face an uncertain future, with growing resentment among prepaid vendors, who say T-Mobile is already using its greater size and leverage to erode commissions and to renegotiate their contracts for the worse. Many prepaid vendors are calling for help that most certainly won't be coming any time soon from the Trump Federal Trade Commission (FTC) and Department of Justice’s Antitrust Division. With that as backdrop, another major effort at wireless consolidation has emerged with Verizon's announced purchase of Tracfone, one of the biggest prepaid vendors in the U.S. The $6.2 billion deal will, Verizon insists, result in "exciting and compelling" products in the years to come: We are excited about the opportunity to bring @Tracfone and its brands into the Verizon family where we can put the full support of Verizon behind this business and provide exciting and compelling products into this attractive segment of the market. https://t.co/crbhXF6xHg pic.twitter.com/aX9VO50t6K — Hans Vestberg (@hansvestberg) September 14, 2020 Yes, if there's one word that American consumers have come to associate with major telecom mergers, it's "excitement." The problem here, of course, is that the direct result of mindless M&A in the U.S. telecom space couldn't be any more apparent. Fewer overall competitors means less effort to seriously compete on price. And the MVNO space had already been under relentless assault by companies like Verizon that have slowly but surely done their best to elbow out any smaller players that dare seriously compete on price with the major networks they must rely on to survive. 
With the postpaid market saturated, wireless players are now forced to eke out growth wherever possible. In this case, via acquisitions, followed by only a superficial continued dedication to lower-priced prepaid wireless offerings. As part of the Tracfone deal, Verizon not only nabs 21 million Tracfone customers, but the company's Net 10, Walmart FamilyMobile, SafeLink, Simple Mobile, Straight Talk Wireless, and Clearway prepaid brands as well. Fewer major networks means less incentive than ever to negotiate on rates, roaming, or much of anything else. With Sprint (the most friendly company to MVNOs by a wide margin) now out of the picture, things have gotten more treacherous for smaller MVNOs than ever. Of course, if the U.S. stays close to its historical norm, in about five years U.S. wireless data prices (pre and postpaid alike) will be significantly higher, and everybody will be left standing around with a dumb look on their collective faces wondering what went wrong.

Read More...
posted 2 days ago on techdirt
Everyone agrees elections should be secure. But hardly anyone in the federal government is doing anything useful about it. The shift to electronic voting has succumbed to regulatory capture, which does nothing to ensure the best and most secure products are being deployed. On top of that, it's become a partisan issue at times, resulting in legislators scoring political points rather than making voting and voters more secure. There may be some good news on the way, although it's unlikely to result in a more secure election in 2020. As Maggie Miller reports for The Hill, political differences have been stowed away for the moment to push an election security bill forward. The House on Wednesday unanimously passed bipartisan legislation intended to boost research into the security of election infrastructure. The Election Technology Research Act would establish and fund a Center of Excellence in Election Systems at the National Institute of Standards and Technology (NIST) to test the security and accessibility of voting equipment, along with authorizing NIST and the National Science Foundation to carry out research on further securing voting technology. The bill [PDF] made its debut last year, but hasn't gone anywhere since February 2020. Now, with an election right around the corner, the bill is finally moving again. This is still pretty last minute, though. The Senate still has to deliver its own version. And it appears to be in no hurry to do that. Earlier this year, the Senate majority blocked three election security bills, adding them to the pile of legislation Senate Majority Leader Mitch McConnell doesn't care for. Even with bipartisan support, one ranking House member thinks the bill just creates more problems. Rep. 
Rodney Davis (R-Ill.), the ranking member of the House Administration Committee, expressed reservations about the legislation on the House floor Wednesday, saying that his panel had not held a markup or hearing on the bill and noting concerns about the legislation potentially undermining work by the Election Assistance Commission. This may be a legitimate concern, but it could just be political posturing. Recent history shows the head of the EAC did more to undermine the EAC's work than any outside election security efforts. Brian Newby, the executive director of the Election Assistance Commission, has blocked important work on election security, micromanaged employees’ interactions with partners outside the agency and routinely ignored staff questions, according to former election officials, former federal employees and others who regularly work with the agency. Newby failed to secure the EAC votes needed to serve another term. He exited the EAC last September, leaving behind a legacy of not giving a damn about election security. The Election Assistance Commission has ceded its leadership role in providing security training, state and local officials say, forcing them to rely on the help of the U.S. Department of Homeland Security, which lacks the same level of experience in the issues confronting the country’s voting systems. [...] The election officials assert that the EAC’s executive director, Brian Newby, has blocked the travel of key staffers at the EAC who specialize in cybersecurity, preventing them from attending what training sessions have taken place. Given this, it's hard to imagine legislation that ropes in the NIST and NSF causing more problems for election security than the Election Assistance Commission has created itself. Even if this bill lands on the President's desk in time for this year's election, it won't make this one any more secure. 
The changes won't be implemented immediately and a report on current security measures and processes won't be provided to Congress for another 18 months. But it should make things better going forward, even if it will be off to a slow start. It finally adds actual researchers to the mix, which should hopefully keep this from becoming a political football every 2-4 years.

Read More...
posted 3 days ago on techdirt
As any internet platform matures, the growth it undergoes will inevitably lead to experimenting with revenue models. For a healthy chunk of the internet, advertising plays some role in those experiments. And, like anything else, there are good experiments and bad experiments. But I am very much struggling to understand who in the hell at Twitch thought that breaking away from live streams to force viewers to watch commercials, all without the control or input of Twitch streamers, could possibly be a good idea. “Beginning in September, as part of an ad experiment, some viewers may begin to notice that they are receiving ads during streams that others in a channel aren’t receiving,” the company wrote on its website. “Like pre-rolls, these are ads triggered by Twitch, not by the creator.” Crucially, these ads utilize Twitch’s “picture-by-picture” functionality, which basically means that the stream you’re watching pops out into a smaller window while the ad rolls in the main window. However, ads will still steal the show from some viewers, with streamers none the wiser as to who can hear what they’re saying (picture-by-picture mutes streams) and, therefore, understand what’s happening on stream while ads are playing. If this reads as though Twitch were trying to turn its platform into some flavor of broadcast television, where the content is broken away from in the service of displaying advertising, that's because that's exactly what this is. Which doesn't make any sense. Twitch is not television. Sure, some streamers choose to break away from their own content for advertising. In fact, doing so staves off this new process of forced breakaways. But many streamers don't do that. For a viewer to be torn away from the content that continues on, muted, all while they're forced to view ads, would be stupid on its own. 
To give streamers not only almost zero control over whether this happens, but also zero visibility into when and to whom it's happening, can only serve to piss everyone off. Which is exactly what it did. “You’re not YouTube,” said Twitch partner ThatBronzeGirl on Twitter in response to Twitch’s announcement. “When ads play in the middle of the stream, viewers actively miss out on content (muted or not). Add this to the fact that viewers are hit with an ad as soon as they enter a stream, so channel surfing is cumbersome. Idk why y’all hate viewer retention.” “This means either one of two things happens: 1) I schedule a break in the stream to have control over ads running that are proven to drive viewers away. 2) Viewers get an ad randomly that is all but guaranteed to drive them away. Which of those is for us though?” said variety streamer Deejay Knight. “If I don’t play enough ads, Jeff Bezos literally comes to my stream and pushes the ad button, what do I do,” said former Overwatch pro Seagull. Let's be clear, Twitch is a thing because of the talent that chooses to use it. It's bad enough to put a new advertising model in place that pisses off viewers. But piss the talent off and they'll simply go somewhere else, particularly when the viewers voice their frustration by removing their eyeballs. Some of this seems to also be Twitch not understanding that the platform is no longer video game let's-plays. The content is wide and varied and much of it cannot function with this sort of intrusive advertising. “A streamer could be talking about suicide prevention, and up pops an ad,” said Scottish Twitch partner Limmy. “Depending on the implementation, the streamer would either be unaware, which is bad, or the streamer has to announce a forced ad break at an inappropriate time.” “We’re not all Overwatch and Fortnite,” said dungeon master MontyGlu. 
“In narrative streams such as DnD live shows and RPG game streams, 10-30 seconds removed could completely deprive people of story, context and investment.” As the Kotaku post notes, part of the problem here is that the monetary incentives of streamers and the platform are horribly misaligned. Many streamers make most of their money through subscriptions and brand partnerships. The money they get from Twitch is mostly an afterthought. Twitch, on the other hand, makes gobs of money from advertisements. It's a scenario in which the platform is incentivized by advertising while the talent is very specifically incentivized by a lack of advertising. More ads drive eyeballs away, which means less lucrative partnerships and subscriptions. If Twitch wants to push more ads, it desperately needs to get the streamers on board. “While I’m not allowed to say specifics, Twitch has the worst CPM ad-revenue share to creators with their standard contracts (read: not the big shots with custom negotiated rates),” said Minecraft YouTuber and Twitch streamer KurtJMac. “They want ads to run because they make bank. Pay a fair rate to creators and we’d be glad to run ads!” Somewhat amazingly, Twitch has stated that it isn't backing down. The experiment will run its course, the company said, and it will review the data afterwards. I simply can't imagine that said data will show that intrusive ads that everyone hates are good for the company.

Read More...
posted 3 days ago on techdirt
Cops lie. Cops lie often enough that there's a term for it: testilying. Honest prosecutors don't want lying cops on the stand dirtying up their case with their impeachable testimony. Unfortunately, police unions are powerful enough to thwart this small bit of accountability. "Brady lists" are compiled by prosecutors. They contain the names of officers whose track record for telling the truth is so terrible that prosecutors don't want to have to rely on their... shall we say... misstatements in court. Unfortunately, these lists are often closely-guarded secrets. Judges aren't made aware of officers' penchant for lying. Neither are defendants in many cases. But they're called "Brady" lists because they're supposed to be disclosed to defendants. The "Brady" refers to Brady v. Maryland, where it was decided prosecutors are obligated to turn over possibly exculpatory information to defendants to ensure their right to a fair trial. This includes anything that might indicate the cop offering testimony might not be telling the truth. The Massachusetts Supreme Judicial Court has ruled [PDF] prosecutors have an obligation to inform defendants of officers who have made their "Brady" lists. Two cops who made false statements in a use-of-force report were granted immunity for their testimony in front of a grand jury. The district attorney prosecuting a different criminal case handed this information over to the defendant. The cops challenged this move, claiming their grand jury immunity should have prevented this exculpatory information from being given to the defendant and discussed in open court. (h/t Matthew Segal) The cops argued there's no constitutional duty to disclose this information (under the US Constitution or the Commonwealth's) unless failing to do so would alter the outcome of the trial by creating reasonable doubt where none previously existed. The Supreme Judicial Court says that argument is wrong under both Constitutions. 
First, prosecutors have more than a constitutional duty to disclose exculpatory information; they also have a broad duty under Mass. R. Crim. P. 14 (a)(1)(iii) to disclose "[a]ny facts of an exculpatory nature." This duty is not limited to information so important that its disclosure would create a reasonable doubt that otherwise would not exist; it includes all information that would "tend to" indicate that the defendant might not be guilty or "tend to" show that a lesser conviction or sentence would be appropriate. [...] Second, even if prosecutors had only their constitutional obligation to disclose, and not the broad duty under our rules, we would not want prosecutors to withhold exculpatory information if they thought they could do so without crossing the line into a violation of the defendant's right to a fair trial. The acceptable standard under the Constitution is not "see what you can get away with." This is an obligation, not a nicety to be deployed at the prosecutor's discretion. A prosecutor should not attempt to determine how much exculpatory information can be withheld without violating a defendant's right to a fair trial. Rather, once the information is determined to be exculpatory, it should be disclosed -- period. And where a prosecutor is uncertain whether information is exculpatory, the prosecutor should err on the side of caution and disclose it. In this case, the information was definitely of the possibly exculpatory variety. Lying cops who've admitted before a grand jury they falsified reports should definitely be considered impeachable witnesses. Whether or not the information is determined admissible at trial is beside the point. [T]he ultimate admissibility of the information is not determinative of the prosecutor's Brady obligation to disclose it. 
Where the information, as here, demonstrates that a potential police witness lied to conceal a fellow officer's unlawful use of excessive force or lied about a defendant's conduct and thereby allowed a false or inflated criminal charge to be prosecuted, disclosing such information may cause defense counsel, or his or her investigator, to probe more deeply into the prior statements and conduct of the officer to determine whether the officer might again have lied to conceal the misconduct of a fellow police officer or to fabricate or exaggerate the criminal conduct of the accused. The cops also argued their immunity from prosecution during their grand jury testimony should shield them from any adverse consequences. Wrong again, says the court. The immunity only covers prosecution for the admitted crimes. It is not a shield against reputational damage that may result from this information being made public or handed over to defendants. An immunized witness, like others who are not immunized, may prefer that the testimony not be disseminated by the prosecutor, especially if it would reveal the witness's dirty deeds, but that preference does not affect whether the information is exculpatory or whether it should be furnished to other defendants. Once disclosed, the immunized testimony may be used to impeach the immunized witness, provided that the testimony is not being used against the witness in a criminal or civil prosecution other than for perjury. In sum, a prosecutor's obligation to disclose exculpatory information is the same for immunized testimony as for all other testimony. There is no higher Brady standard applied for a prosecutor to disclose immunized testimony. The Court wraps this up by laying down the law: this is Brady info and it needs to be disclosed to defendants. The SJC is not fucking around. 
[W]e conclude, as did the district attorney, that the prosecutors here have a Brady obligation to disclose the exculpatory information at issue to unrelated criminal defendants in cases where a petitioner is a potential witness or prepared a report in the criminal investigation. That obligation remains even though that information was obtained in grand jury testimony compelled by an immunity order. And the district attorney may fulfill that obligation without prior judicial approval; a judge's order is needed only for issuance of a protective order limiting the dissemination of grand jury information. More broadly, we conclude that where a prosecutor determines from information in his or her possession that a police officer lied to conceal the unlawful use of excessive force, whether by him or herself or another officer, or lied about a defendant's conduct and thereby allowed a false or inflated criminal charge to be prosecuted, the prosecutor's obligation to disclose exculpatory information requires that the information be disclosed to defense counsel in any criminal case where the officer is a potential witness or prepared a report in the criminal investigation. That's the standard in Massachusetts. And bad cops are on notice there's pretty much nothing they can do to escape the consequences of their own actions. This is as it should be. Now, if the courts could just make sure prosecutors and police departments are actually compiling Brady lists, we'd be set. At least in this Commonwealth.

Read More...
posted 3 days ago on techdirt
While the TikTok part of Trump's original August Executive Order got all the attention, we pointed out that it was fairly notable that he issued a nearly identical order to also effectively ban WeChat by blocking any transactions related to WeChat. While WeChat is not that well known or widely used in the US, it is basically central to the Chinese internet, and, as such, is a key part of how many Chinese Americans stay in touch with relatives, friends, and colleagues back in China. So it was perhaps not that surprising that a group of WeChat users in the US quickly sued to try to block the order: Neither the Executive Order itself nor the White House provided concrete evidence to support the contention that using WeChat in the United States compromises national security. Notably, no other nation has implemented a comprehensive WeChat ban on the basis of any like-finding that WeChat is a threat to national security. The Executive Order was, however, issued in the midst of the 2020 election cycle, during a time when President Trump has made numerous anti-Chinese statements that have contributed to and incited racial animus against persons of Chinese descent—all outside of the national security context. In a stark violation of the First Amendment, the Executive Order targets and silences WeChat users, the overwhelming majority of whom are members of the Chinese and Chinese-speaking communities. It regulates constitutionally protected speech, expression, and association and is not narrowly tailored to restrict only that speech which presents national security risks to the United States. Accordingly, it is unconstitutionally overbroad. Indeed, banning the use of WeChat in the United States has the effect of foreclosing all meaningful access to social media for members of the Chinese-speaking community, such as Plaintiffs, who rely on it to communicate and interact with others like themselves. 
The ban on WeChat, because it substantially burdens the free exercise of religion, also violates the Religious Freedom Restoration Act. The Executive Order runs afoul of the Fifth Amendment’s Due Process Clause by failing to provide notice of the specific conduct that is prohibited; because of this uncertainty, WeChat users in the United States are justifiably fearful of using WeChat in any way and for any purpose—and also of losing access to WeChat. Since the Executive Order, numerous users, including Plaintiffs, have scrambled to seek alternatives without success. They are now afraid that by merely communicating with their families, they may violate the law and face sanctions. As the complaint highlights, just the issuing of the Executive Order has created panic among people who rely on it to communicate with people in China: The U.S. WeChat Users Alliance (“USWUA”), Chihuo, Inc., Brent Coulter, Fangyi Duan, Jinneng Bao, Elaine Peng, and Xiao Zhang (collectively, “Plaintiffs”), bring this suit to challenge the Executive Order, which eviscerates an irreplaceable cultural bridge that connects Plaintiffs to family members, friends, business partners, customers, religious community members, and other individuals with common interests within the Chinese diaspora, located both in and outside of the United States. The Executive Order has already harmed Plaintiffs, who are plagued with fear for the loss of their beloved connections, whether it be with friends, family, community, customers, aid recipients of the charities they run, or even strangers whose ideas enrich their lives. They have been forced to divert time, energy, and money to seek alternative communication platforms, download and save irreplaceable digital histories, plan for business closures, find other sources of information, and try to obtain alternative contact information for those from whom they will soon be separated. 
Even if they succeed to some extent in their mitigation efforts, Plaintiffs will never be able to replace the full spectrum of the social interactivity that WeChat offers, nor will they be able to find any social networking platform with anything close to the same level of participation by the global Chinese diaspora—this is because WeChat’s network effects, generated by its 1 billion-plus daily users, is irreplaceable. The plaintiffs have also filed for a preliminary injunction against the Executive Order (which is set to go into effect on Sunday). There's a hearing on Thursday. So far, the plaintiffs have failed to get expedited discovery, as the judge notes that pretty much everything so far relies clearly on public information, and there's no need for discovery at this point -- not to mention there would be a pretty big argument over what things are actually subject to discovery anyway. The government's opposition to the injunction request is... weird? It basically starts out with a big attack on China that's just sort of priming the pump and hand-waving around the idea that if China is bad then it's self-evident that any app that comes out of China must also be bad. This part of the argument focuses on... companies that are not Tencent/WeChat, but instead does the fear mongering about Huawei and ZTE that (we've noted many times) has never presented any actual evidence of bad behavior by those companies. Also, Huawei and ZTE are not... Tencent. So then the DOJ just points to a random Australian think tank white paper that says Tencent/WeChat is also bad. They cite a few other such reports, but the "bad" seems to be that China heavily censors WeChat and... duh? But how does that mean it's dangerous in the US and should be banned? 
Incredibly -- given the frequency with which the President himself has retweeted conspiracy theories pushed by Russian troll accounts -- the DOJ actually argues that because some disinformation is found on WeChat, that's reason enough to ban it: The Report also observed that the WeChat app is a key tool for China’s disinformation campaigns, citing as one example Australia’s May 2019 election, in which “fake news on WeChat was such a problem that Australia’s Labor Party contacted WeChat owner Tencent to express frustration about posts spreading disinformation.” Id. at 406-07. The Report cautioned that “use of [WeChat] had spread beyond the Chinese Australian community, with about 3 million Australians using WeChat by 2017,” id., and that “almost the entire Mandarin-speaking community in Australia . . . used WeChat,” allowing “Beijing [to] ‘promote particular issues [as] a way of controlling public debate.’” I find it pretty fucking ironic that at the same time our government is using claims of "fake news" on a social media platform as an excuse to ban it, it is also trying to force American social media companies to no longer be able to moderate "fake news" and foreign propaganda. Where the government may have more success is by arguing that the claims are "not ripe" because the executive order hasn't been implemented yet. But, that's just kicking the can down the road. Because once it is implemented, the same basic claims will remain. It does argue that the 1st Amendment claim will fail because it's content neutral. That is, the DOJ is saying "we're not targeting specific speech, we're just banning an app used for speech.... that happens to be used by lots of Chinese-speaking people." I think that's... pretty weak. That's "we're not blocking the printing of your magazine, just ordering the destruction of your printing presses, which might be used to print any magazine." That's not allowed under the 1st Amendment, and it shouldn't be allowed under this order. 
Anyway, we should get a ruling at least on the preliminary injunction relatively quickly (given that it's slated to go into effect on Sunday). I hope the court does grant the injunction, but I'd be surprised if it does. It seems much more likely to punt based on ripeness for now. However, this case (unlike the TikTok cases, which may not matter if a deal is reached) could go on for quite a while, and could be pretty damn important in determining if the White House can just up and ban a foreign software application.

Read More...
posted 3 days ago on techdirt
The perennial make-PACER-free legislation has arrived. If you're not familiar with PACER, count yourself among the lucky ones. PACER performs an essential task: it provides electronic access to federal court dockets and documents. That's all it does and it barely does it. PACER charges taxpayers (who've already paid taxes to fund the federal court system) $0.10/page for EVERYTHING. Dockets? $0.10/page. (And that "page" is very loosely defined.) Every document is $0.10/page, as though the court system was running a copier and chewing up expensive toner. So is every search result page, even those that fail to find any responsive results. The user interface would barely have been considered "friendly" 30 years ago, never mind in the year of our lord two thousand twenty. Paying $0.10/page for everything while attempting to navigate a counterintuitive interface draped over something that looks like it's being hosted by Angelfire is no one's idea of a nostalgic good time. Legislation attempting to make PACER access free was initiated in 2018. And again in 2019. We're still paying for access, thanks to the inability of legislators to get these passed. Maybe this is the year it happens, what with a bunch of courtroom precedent being built up suggesting some illegal use of PACER fees by the US Courts system. We'll see. Here's what's on tap for this year's legislative session:

Representatives Hank Johnson (D-Ga.) and Doug Collins (R-Ga.) are hoping to drastically change all of the above with their bipartisan reform effort, the Open Courts Act (OCA). The bill would make online access to federal court records free to the public. It also contains language that would effectively improve upon PACER’s current and wildly out-of-date search functionality, increase third-party accessibility to the entire system, and upgrade and maintain the database using modern data standards.

This is a good bill. It aims for something more than just free access.
(To be honest, that would at least offset the frustration of subjecting yourself to PACER's hideous charms in an attempt to talk it out of some filings.) Free access is a necessity. At this point, the presumed openness of the court still hides behind a paywall, separating citizens from courtroom documents under the naive theory that it's impossible to give something away if it costs money to produce. (And that assumption ignores the tax dollars already earmarked for running the court system.) This bill would also drag the PACER system (presumably kicking and screaming) into the future… or at least a much more recent past. The 1995-esque front end would be updated, along with all the other stuff that doesn't work well… which is pretty much everything. It would be a bit more future-proofed. The bill [PDF] demands transparent coding that will incorporate "non-proprietary, full text searchable, platform-independent" elements. This means documents will finally be searchable by the text they contain, rather than limited to locating documents by finding the right docket and going from there. And this will hopefully fix another problem with PACER: search issues baked into the system by jurisdiction divisions. Each federal court has its own login page and, while it's possible to search all jurisdictions, it's far more likely you'll be dimed to death by useless searches before you find what you need. But who's going to pay for this, I hear the US Courts system asking? Well, like any other free-to-play service, it will be mostly supported by whales.

On its own terms, the OCA would take two to three years to modernize the overall CM/ECF so that all court documents are searchable, readily accessible and machine-readable regardless of an end user’s browser setup. During this period, so-called institutional “power users” would still be subject to PACER fees–if they charge over $25,000 annually.

But not forever. After that, fees would vanish entirely. Will this be the bill that sticks?
Maybe. Courts are finding the PACER system questionable -- not just the barrier it places between the public and court documents, but the uses of the fees as well, very little of which has actually been spent on improving PACER itself. If there's something almost everyone agrees with, it's that PACER sucks. Being asked to pay for the dubious privilege of using a barely working system is the insult piled on top of the $0.10/page injury.

Read More...
posted 3 days ago on techdirt
As was rumored late last week, the White House is, in fact, nominating Nathan Simington to the FCC, taking over the seat of Mike O'Rielly, whose nomination was withdrawn just days after O'Rielly expressed his strong support for the 1st Amendment and made it clear what he thought of idiots calling for the government to force websites to host content:

The First Amendment protects us from limits on speech imposed by the government—not private actors—and we should all reject demands, in the name of the First Amendment, for private actors to curate or publish speech in a certain way. Like it or not, the First Amendment’s protections apply to corporate entities, especially when they engage in editorial decision making. I shudder to think of a day in which the Fairness Doctrine could be reincarnated for the Internet, especially at the ironic behest of so-called free speech “defenders.” It is time to stop allowing purveyors of First Amendment gibberish to claim they support more speech, when their actions make clear that they would actually curtail it through government action. These individuals demean and denigrate the values of our Constitution and must be held accountable for their doublespeak and dishonesty. This institution and its members have long been unwavering in defending the First Amendment, and it is the duty of each of us to continue to uphold this precious protection.

While there are many things we've disagreed with O'Rielly about, on this one, we agree 100%. And, the thanks he gets is effectively being fired by the President... and then replaced with someone who appears to believe the exact opposite. Simington is apparently the guy who wrote the utterly nonsensical, blatantly unconstitutional Executive Order that President Trump signed after he got mad that Twitter placed two fact checking notices on his dangerous and misleading tweets. Note the situation here. Twitter (and the rest of the internet) is now being punished for providing more speech.
This is, of course, what people like Simington like to claim they support. But when it comes down to reality, they seem to want to just force the internet to host the speech of their friends, and never to do anything such as present counterarguments. On top of that, they wish to force private companies to host speech they do not support and do not believe in. All of this is unconstitutional. Yet, now the author of this nonsense gets rewarded with a potential FCC Commissionership. It's not clear if the Senate would find the time to do confirmation hearings before the election, but there's a decent chance that now rather than there being just one (Hi, Brendan Carr) FCC Commissioner who relishes using the power of the FCC to punish companies he doesn't like, we'll have two FCC Commissioners who have abandoned all pretenses that the Republican FCC Commissioners support the 1st Amendment and favor a "light touch" regulatory regime. They seem to only favor that for the telcos so many FCC Commissioners end up going to work for after leaving the FCC. For internet companies? They seem to think the opposite. Considering Simington's direct role in writing the executive order, and then working at NTIA while it crafted the petition for the current FCC review of Section 230, you would think that, should he actually be approved by the Senate, he should at the very least recuse himself from this particular matter. But, given this particular administration and their unwillingness to actually obey the law and follow the rules when it comes to "owning the libs" or whatever their motivation is, it wouldn't surprise me to see him take part in any vote.

Read More...
posted 3 days ago on techdirt
The 2020 Adobe CC Essentials Course Bundle has 15 courses to help you learn the full gamut of Adobe products. You'll learn graphics, web development, video editing, photography, and more. Courses cover these products: Photoshop, Lightroom, Behance, Dreamweaver, Audition, Premiere Rush, XD, Portfolio, Fonts, Stock, After Effects, Premiere Pro, InDesign, Illustrator, and Spark. It's on sale for $50. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Read More...
posted 3 days ago on techdirt
If Attorney General Bill Barr is ever gifted with superlatives, the one that will stick will be "worst." After presiding over some civil liberties violations under Bush I, Barr has returned to AG work under Trump and seems dead set on making everyone forget his first reign of far-more-limited terror. Barr wants encryption backdoors, the end of Section 230 immunity, and law enforcement officers promoted to the rank of demigod. The public will be expected to absorb the collateral damage. Bill Barr does know how to deliver a good speech, whether he's preaching to the converted or, in this recent speech, preaching to some developing converts. Speaking to Hillsdale College students during their Constitution Day event, Barr said he's trying to build a kinder, gentler DOJ. In exercising our prosecutorial discretion, one area in which I think the Department of Justice has some work to do is recalibrating how we interpret criminal statutes. In recent years, the Justice Department has sometimes acted more like a trade association for federal prosecutors than the administrator of a fair system of justice based on clear and sensible legal rules. In case after case, we have advanced and defended hyper-aggressive extensions of the criminal law. This is wrong and we must stop doing it. [...] To be clear, what I am describing is not the Al Capone situation — where you have someone who committed countless crimes and you decide to prosecute him for only the clearest violation that carries a sufficient penalty. I am talking about taking vague statutory language and then applying it to a criminal target in a novel way that is, at a minimum, hardly the clear consequence of the statutory text. This is definitely something that could use improvement. The DOJ has engaged in plenty of bad-faith, overly-aggressive prosecutions. Almost anything involving the CFAA comes to mind. But Barr can't lead this reform. He doesn't even really want it. 
As he was delivering this speech about prosecutorial discretion, news broke detailing the contents of a phone call Barr had with DOJ prosecutors:

Attorney General William Barr expressed frustration with some local and state prosecutors' handling of riot-related crimes, telling top Justice Department prosecutors that he wants them to be aggressive in bringing charges related to protest violence, including exploring using a rarely used sedition law, according to a person familiar with the matter.

This isn't discretion. This is [checks Barr's Constitution Day speech] a "hyper-aggressive extension of criminal law," the "taking" of "vague statutory language and applying it to a criminal target in a novel way." Barr's not going to practice what he preached at Hillsdale College and he doesn't want his prosecutors engaging in restraint either. Proving sedition is difficult. That's why we haven't historically charged violent protesters with sedition. There are a bunch of other federal and local statutes that capably address acts of violence or vandalism. There's no reason federal prosecutors should start pretending violence or vandalism occurring during/adjacent to anti-police brutality protests is a conspiracy to overthrow the government or "oppose by force" federal laws and statutes. There has only been one successful sedition prosecution in the last 25 years. It seems unlikely using this law to ensure protest-related prosecutions are federal is going to work. But that's not all. Barr also wanted DOJ prosecutors to find some way to go after Seattle's mayor over her handling of protests in her city.

Attorney General William Barr asked Justice Department prosecutors to explore charging Seattle Mayor Jenny Durkan (D) over a protest zone in the city, The New York Times reported Wednesday.
Barr asked prosecutors in the department's civil rights division to explore charging Durkan during a call with prosecutors last week, the Times reported citing two people briefed on those discussions.

Barr's nice words about dialing back aggressive prosecutions were aimed solely at DOJ prosecutors who have made the mistake of going after Trump or his underlings in the administration. Barr doesn't care about the victims of over-prosecution who don't have connections to the White House. Those people are still on their own and still subject to the whims of prosecutors who have been given free rein to interpret the law for maximum prosecutorial efficiency. Barr said the quiet part loud later in his Hillsdale speech:

Rather than root out true crimes — while leaving ethically dubious conduct to the voters — our prosecutors have all too often inserted themselves into the political process based on the flimsiest of legal theories. We have seen this time and again, with prosecutors bringing ill-conceived charges against prominent political figures, or launching debilitating investigations that thrust the Justice Department into the middle of the political process and preempt the ability of the people to decide.

On one hand, this is a sickening display of sycophancy. On the other hand, it will save the taxpayers some money. No sense wasting time prosecuting someone Trump's just going to pardon. Barr's day of awfulness finally came to an end with this unbelievably hot take in response to a student's question about COVID-19 lockdowns. There's no way to really brace yourself for his response:

"You know, putting a national lockdown, stay at home orders, is like house arrest. Other than slavery, which was a different kind of restraint, this is the greatest intrusion on civil liberties in American history," Barr said as a round of applause came from the crowd.

The Greatest Intrusion. Well. OK then.
Uh, let’s see: internment camps, literacy tests, segregation, no-fly lists, cointelpro, TALON database, NSA warrantless wiretaps Bill Barr approved and oversaw one of the most legally dubious dragnet surveillance programs ever known, spying on billions of US telephone calls https://t.co/kBB9IlNf9Z — Dell Cameron (@dellcam) September 17, 2020 Bill Barr can no longer be satirized. He'd be an unsubtle farce capable of gathering only the cheapest laughs if he wasn't actually in charge of the goddamn Department of Justice. This makes him frightening, rather than pitiable.

Read More...
posted 3 days ago on techdirt
AT&T is telling Reuters that it's considering offering wireless customers a "$5 to $10 reduction in their bill" in exchange for some targeted ads:

“I believe there’s a segment of our customer base where given a choice, they would take some load of advertising for a $5 or $10 reduction in their mobile bill,” Stankey said. Various companies including Amazon.com Inc, Virgin Mobile USA and Sprint’s Boost Mobile have tested advertising supported phone services since the early 2000s but they have not caught on. AT&T is hoping that better advertising targeting could revive the idea.

Doling out discounts in exchange for ads doesn't sound like a bad idea on its face. The problem is that's not quite what AT&T is planning. AT&T's goal here is to create a paradigm where people willing to be tracked and hammered with behavioral ads will pay less than those who want to have their privacy respected. In recent years, AT&T has made it very clear the company wants a paradigm whereby opting out of snoopvertising and tracking will cost you more, effectively making privacy a luxury line item (not great for a country already in a broadband affordability crisis). AT&T already tried some variation of this idea once, and it wasn't just "discounts for ads." The company spent several years charging its broadband subscribers up to $500 more (!) per year to opt out of its snoopvertising systems. The kicker: it only opted you out of receiving behavioral ads, not out of being tracked. This was then passed off to consumers and the press as some kind of discount, when again it was simply making privacy (more accurately the illusion of privacy) only possible with an additional charge. The other problem, of course, is that this is AT&T. A government-pampered telecom monopoly with a very long history of talking a lot about innovation, then inevitably falling flat on its face once it actually attempts it.
It's also a company with a very long history of cozying up to the NSA, repeatedly violating consumer privacy, and undermining absolutely any effort whatsoever to craft even modestly serious privacy guidelines. It's been particularly opposed to any privacy guidelines that would prohibit companies charging a surcharge for privacy protection. This is all fairly important context Reuters' scoop oddly fails to mention. AT&T's new pivot to ad-sponsored plans, which is still a year or two out, involves hoovering up an awful lot of location, viewing, and other data from the company's wireless, broadband, phone, and TV customers. AT&T's been a little slow to capitalize on all this data due to a heavy debt load, executive dysfunction, and an investor revolt, but the scope of what they're building from a consumer tracking perspective should be fully understood:

"AT&T engineers are creating “unified customer identifiers,” Stankey said. Such technology would allow marketers to identify users across multiple devices and serve them relevant advertising. The ability to fine tune ad targeting would allow AT&T to sell ads at higher rates, he said. AT&T has invested in developing targeted advertising on its own media properties using data from its phone, TV and internet customers, but the company has been “slower in coming up the curve” on expanding its marketplace that allows advertisers to use AT&T data to target other media companies’ audiences, Stankey said."

AT&T policy folks and lobbyists have (with the GOP's help) managed to convince a big chunk of DC and tech policy Twitter that when we talk about privacy, monopolization, and the health of the internet that "big tech" is the root of all evil. As a result we're launching a slew of "antitrust inquiries" into "big tech," while effectively gutting all meaningful oversight of telecom giants that have the same ad and consumer tracking ambitions but access to as much if not more data than the biggest Silicon Valley giants.
I'm sure that kind of accountability vacuum and wholly asymmetrical tech policy won't be a problem down the road though, right?

Read More...
posted 3 days ago on techdirt
The UK government is fine with press freedom as long as the press confines itself to the unwritten guidelines the government uses to restrict it. Publish too many leaked documents? Well, the government will show up and destroy your computer equipment. Report on the wrong stuff? The government will kick you out of Parliament and tell you not to talk about why you've been kicked out. Publish names of people targeted by UK government investigations in the Land of the First Amendment and across the pond from the UK? Expect a UK court to issue a ruling telling you to abide by laws that don't govern the country you're actually publishing in. The UK government is again stepping on free press toes. And human rights organizations have noticed. Independent journalism outfit Declassified UK was recently told its journalistic services were no longer required… or would at least no longer be respected by the Ministry of Defence.

The UK government has been formally warned for threatening press freedom after it blacklisted a group of investigative journalists and denied them access to information. The Council of Europe issued the Level 2 "media freedom alert" after Ministry of Defence press officers refused to deal with Declassified UK, a website focusing on foreign and defence policy stories.

As the Independent reports, this aligns the UK government with Russia and Turkey, which received similar alerts recently for, respectively, beating and jailing journalists critical of their governments. Here's the chain of events that led to the Level 2 alert, as reported by Marcela Kunova of journalism.co.uk.

On 25 August, Declassified UK journalist Phil Miller contacted the MoD’s press office to request a comment about the arrest of Ahmed al Babati, a serving soldier, near Downing Street for protesting the United Kingdom’s involvement in Saudi Arabia’s bombing of Yemen.
Miller was promised information at first but the press office later called him to enquire about the publication’s editorial coverage of the conflict. "What sort of angle have you taken on the war in Yemen?” the MoD spokesperson asked. [...] Not long after, Miller received an email telling him that the MoD was not going to send him anything that day, but that he should "submit an FOI [Freedom of Information request] for anything that you require". [...] When Miller enquired with his contact at the press office, he was told: "My understanding from the office is that we no longer deal with your publication."

Declassified UK feels this blacklisting is the result of its earlier reporting on questionable Ministry of Defence activities, like training Saudi pilots who were involved with bombings of civilians in Yemen. It's not just the Council of Europe that's noticed the UK government's decision to refuse to respond to journalists it apparently doesn't care for. The International Press Institute has sent a letter to MoD officials criticizing the agency for its actions.

It goes without saying that the exclusion of a media publication by a government ministry due to its investigative reporting would undermine press freedom and set a worrying precedent for other journalists whose job it is to report in the public interest on the British military. Criticism should be no reason to discriminate against a media publication. In contrast, tough journalism by outlets such as Declassified UK on matters such as the UK’s foreign and military affairs, uncomfortable though it often may be for those in power, is crucial for a transparent and functioning democracy.

The letter also asks for "clarification" on the decision by the MoD's press office. Presumably, no explanation will be provided.
If anything, the MoD will just go back to handing out "no comments" to Declassified UK, rather than call any more attention to itself by cutting the independent journalists out of the minimal info loop. But, for now, the MoD has aligned itself with Russia and Turkey. It may not be demanding the jailing/beating of critics (at least, not out loud), but it's shown it's unwilling to handle criticism like a free world government agency.

Read More...
posted 4 days ago on techdirt
It has been a long and largely fruitless road for Origin, EA's PC gaming client that it had planned on building into a rival of Valve's Steam. What was originally supposed to have been the chief antagonist to Steam in the ongoing PC gaming platform wars instead is best described as a failure to launch. Released in 2011, Origin began life as it lived in total: the walled garden for most EA games. Criticism appeared almost immediately, stemming from odious requirements to relinquish personal information, the use of DRM, and security flaws. Couple that with a game library that was relatively limited compared with Steam (by design, mind you), and it's not difficult to understand why the adoption numbers for the game client just never took off. Several weeks ago, to the surprise of many, EA suddenly released its gaming catalog on Steam. Given the long history of the company keeping its toys for itself, it left many scratching their heads in confusion. This week, the inevitable occurred, with EA announcing that Origin will be no more. Instead, the PC gaming client will rebrand, rebuild, and become an optional place for EA gamers to play, rather than a Fort Knox for EA games.

EA has yet another piece of interconnected news to share: it's rebranding its Origin desktop app to simply be called the EA desktop app, alongside giving its PC platform a visual refresh. Speaking to GamesIndustry.biz, EA SVP, strategic growth Mike Blank says the overhaul is intended "to create a more frictionless, fast, socially-oriented experience for our players, where it becomes the best place for them to connect with the people they want to play with in the games they want to play."

I'm frankly not used to giving EA a ton of kudos in these pages, but the overall strategy is a good one. The company appears to have finally realized that being permissive with gamers that just want to play the company's games is better business than trying to lock them into a failed client few want to use.
The revamping of the UX was long needed, too, but the real star of the show here is that EA is looking to be more open in general.

"All of that is signaled by creating a common and consistent brand that is centered around EA and what EA stands for," Blank says. "And what signals it is this inflection about how EA stands for bringing your players together around the games they want to play on the platforms they want to play on. So yeah, it's not just a name change. It really signals an ethos that is critically important to us and that we know that's important to our players. It's been a long journey for EA in this regard to where our games show up and where they don't. One of the things that we value is democratizing gaming, which is: how do you enable more people to play? And how do you make it easy for them to do so? And by bringing our games to Steam, we are doing just that. So whether we were there in the past or not, I look towards the future. And what I think today is that we are stronger and healthier. And I think we're responding more effectively to the needs of our players today than we ever have, and Steam is part of that journey."

Again, this is EA we're talking about, so it's going to take more than just the right words to convince most of us that this truly is a new direction for the company. Still, these are the right words. EA has long built a reputation for itself as being anti-consumer in many ways, but all of those ways come down to one thing: control. For a company with that history to suddenly start giving up that control, not out of surrender but out of a belief that it's good business, is a positive step.

Read More...
posted 4 days ago on techdirt
Summary: Late in June 2020, a leak-focused group known as "Distributed Denial of Secrets" (a.k.a. "DDoSecrets") published a large collection of law enforcement documents apparently obtained by the hacking collective Anonymous. The DDoSecrets data dump was timely, released as protests over the killing of a Black man by a white police officer continued around the nation into their second consecutive month. Links to the files hosted at DDoSecrets' website spread quickly across Twitter, identified by the hashtag #BlueLeaks. The 269-gigabyte trove of law enforcement data, emails, and other documents was taken from Netsential, which confirmed a security breach had led to the exfiltration of these files. The exfiltration was further acknowledged by the National Fusion Center Association, which told affected government agencies the stash included personally identifiable information. While this trove of data proved useful to activists and others seeking uncensored information about police activities, some expressed concern the personal info could be used to identify undercover officers or jeopardize ongoing investigations. The first response from Twitter was to mark links to the DDoSecrets files as potentially harmful to users. Users clicking on links to the data were told it might be unsafe to continue. The warning suggested the site might steal passwords, install malicious software, or harvest personal data. The final item on the list in the warning was a more accurate representation of the link destination: it said the link led to content that violated Twitter's terms of service. Twitter's terms of service forbid users from "distributing" hacked content. This ban includes links to other sites hosting hacked content, as well as screenshots of forbidden content residing elsewhere on the web. Shortly after the initial publication of the document trove, Twitter went further. It permanently banned DDoSecrets' Twitter account over its tweets about the hacked data.
It also began removing tweets from other accounts that linked to the site.

Decisions to be made by Twitter:

Should the policy against the posting of hacked material be as strictly enforced when the hacked content is potentially of public interest?
Should Twitter have different rules for “journalists” or “journalism organizations” with regards to the distribution of information?
How should Twitter distinguish “hacked” information from “leaked” information?
Should all hacked content be treated as a violation of site terms, even if it does not contain personal info and/or trade secrets?
How should Twitter handle mirrors of such content?
How should Twitter deal with the scenario in which someone links to the materials because of their newsworthiness, without even knowing the material was hacked?

Questions and policy implications to consider:

Does a strict policy against "distributing" hacked content negatively affect Twitter's value as a source of breaking news?
Does the mirroring of hacked content significantly increase the difficulty and cost of moderation efforts?

Resolution: While DDoSecrets' site remains up and running, its Twitter account does not. The permanent suspension of the account and additional moderation efforts have limited the spread of URLs linking to the apparently illicitly-obtained documents.

Read More...
posted 4 days ago on techdirt
Late last year, we designed Threatcast 2020: a brainstorming game for groups of people trying to predict the new, innovative, and worrying forms of misinformation and disinformation that might come into play in the upcoming election. We ran a few in-person sessions before the pandemic hit and ended our plans for more, then last month we moved it online with the help of the fun interactive event platform Remo. We've learned a lot and hit on some disturbingly real-feeling predictions throughout these events, so this week we're joined by our partner in designing the game — Randy Lubin of Leveraged Play — to discuss our experiences "threatcasting" the 2020 election. We really want to run more of these online events for new groups, so if that's something you or your organization might be interested in, please get in touch! Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

Read More...
posted 4 days ago on techdirt
When we launched Techdirt Greenhouse, we noted that we wanted to build a tech policy forum that not only tackled the thorniest tech policy issues of the day, but did so with a little more patience and nuance than you'll find at many gadget-obsessed technology outlets. After our inaugural panel tackled privacy, we just wrapped on our second panel subject: content moderation. We'd like to thank all of those that participated in the panel, and all of you for reading. You'd be hard pressed to find a thornier, more complicated subject than content moderation. On one hand, technology giants have spent years prioritizing ad engagement over protecting their user base from malicious disinformation and hate speech, often with fatal results. At the same time, many of the remedies being proposed cause more harm than good by trampling free speech, or putting giant corporations into the position of arbiters of acceptable public discourse. Moderation at this scale is a nightmare. One misstep in federal policy and you've created an ocean of new problems. Whether it's the detection and deletion of live-streaming violence, or protecting elections from foreign and domestic propaganda, it's a labyrinthine, multi-tendriled subject that can flummox even experts in the field. We're hopeful that this collection of pieces helped inform the debate in a way that simplified some of these immensely complicated issues. Here's a recap of the pieces from this round in case you missed them:

Michael Karanicolas examined how localized content moderation decisions can have a massive, often unpredictable global impact, as disinformation-fueled genocide makes abundantly clear.
Robert Hamilton explored the need to revisit the common law liability of online intermediaries before Section 230, helping us better understand how we got here.
Jess Miers explored how getting rid of Section 230 won't magically eliminate the internet's most problematic content.
Aye Min Thant took a closer look at how conflating Facebook with "the internet" in locations like Myanmar, without understanding the culture or having adequate safeguards in place, threw accelerant on the region's genocide. Matthew Feeney examined how evidence "supporting" the repeal of Section 230 is shaky at best, and the fixation on Section 230 is hugely myopic. John Bergmayer argued that it doesn't make sense to treat ads the same as user-generated content, and that websites should face the same legal risk for the ads they run that print publishers do. Brandi Collins-Dexter explored how the monetization of polarization has had a heartbreaking impact on America's deep, longstanding relationship with bigotry. Emma Llanso discussed how the sharing of content moderation knowledge shouldn't provide a backdoor to cross-platform censorship. David Morar explored how many of the problems currently being blamed on "big tech" are really just simple, ordinary human fallibility. Yosef Getachew examined how social media could easily apply many of the content moderation practices they've custom-built for COVID-19 to the battle to protect election integrity from domestic and foreign disinformation. Adelin Cai and Clara Tsao offered a useful primer for trust and safety professionals tasked with tackling the near-impossible task of modern content moderation at scale. Gaurav Laroia & Carmen Scurato discussed how fighting online hate speech requires keeping Section 230, not discarding it. Taylor Rhyne offered a useful content moderation primer for startups facing a daunting challenge without the bottomless budgets of their "big tech" counterparts. Graham Smith took a closer look at the content moderation debate and how it intersects with existing post-Brexit headaches in the UK. Daphne Keller took a deep dive into what policymakers can do if they don't like existing platform free speech rules, and how none of the options are particularly great. 
Much like the privacy debate, crafting meaningful content moderation guidelines and rules (and ensuring consistent, transparent enforcement) was a steep uphill climb even during the best of times. Now the effort will share fractured attention spans and resources with an historic pandemic, the recovery from the resulting economic collapse, and the endless web of socioeconomic and political dysfunction that is the American COVID-19 crisis. But, much like the privacy debate, it's an essential discussion to have all the same, and we hope folks found this collection informative. Again, we'd like to thank our participants for taking the time to provide insight during an increasingly challenging time. We'd also like to thank Techdirt readers and commenters for participating. In a few weeks we'll be announcing the next panel: one that should prove timely during an historic health crisis that has forced the majority of Americans to work, play, innovate, and learn from the confines of home.

Read More...
posted 4 days ago on techdirt
A variety of lawsuits have been filed over Trump's silly TikTok Executive Order, but one interesting case involves a TikTok employee, Patrick Ryan, who filed suit on his own behalf to try to block the Executive Order from going into effect. A key part of Ryan's argument is that since the executive order bans transactions, his own salary from TikTok's parent company, ByteDance, might be blocked by the US government. It is impossible to know now whether the Commerce Department will exempt the payment of wages and salaries from the dictates of the Executive Order, and Plaintiff will not know until the day the order is to take effect, but any plain reading of the language of the order would include the payment of wages and salaries to U.S. employees of TikTok within that definition. As such, Ryan asked the court to issue a Temporary Restraining Order to block the Executive Order from actually going into effect on September 20th. There's more to the lawsuit than that, but the DOJ responded to say "we won't block employee salaries." The Department of Commerce can state that it does not intend to implement or enforce Executive Order 13942 in a manner which would prohibit the payment of wages and/or salaries to Plaintiff or any other employee or contractor of TikTok. The Department of Commerce can state that it does not intend to implement or enforce Executive Order 13942 in a manner which would prohibit the provision of benefits packages to Plaintiff or any other employee of TikTok. The Department of Commerce can state that it does not intend to implement or enforce Executive Order 13942 in a manner which would result in the imputation of civil or criminal liability to Plaintiff or any other employee or contractor of TikTok for performing otherwise lawful actions that are part of their regular job duties and responsibilities. 
That caused Ryan's lawyers to declare at least an initial victory: This morning, the Government advised Plaintiff’s counsel and later the Court that it in fact will not apply the Executive Order to the payment of TikTok wages, salaries or benefits, or impose civil or criminal sanctions against them for doing their jobs, thereby mooting the need to seek a temporary restraining order against the Government to protect the TikTok employees. We are pleased that our litigation was able to achieve this fantastic result for the thousands of TikTok employees around the world, and we are confident that the remaining issues in this case also will be litigated fully to a successful conclusion, which will be the striking of the Executive Order as an unconstitutional overreach by this U.S. President. However, it also made it easy for the judge to then deny the requested TRO: Ryan’s application for a temporary restraining order is denied for two related reasons. First, there is a serious question about whether this Court has jurisdiction to issue a temporary restraining order at this point in time. It seems unlikely that the conflict between Ryan and the federal government has ripened into a true “case or controversy” within the meaning of Article III of the United States Constitution. Babbitt v. United Farm Workers National Union, 442 U.S. 289, 297 (1979). Whether Ryan could actually face prosecution for getting a paycheck from TikTok depends on a number of uncertain conditions. As a foundational matter, the President may only exercise his emergency powers to block transactions with a foreign-owned entity. ByteDance is widely reported to be in negotiations to alter its ownership structure in a manner that could result in non-enforcement of the Executive Order. Even if that fails, the Secretary of Commerce would need to include payments to employees on the list of prohibited transactions. 
And then there would need to be a real risk that the federal government would actually start prosecuting TikTok employees for receiving paychecks. That is an unlikely chain of events—indeed, yesterday the government filed a notice in this case specifying that the Department of Commerce “does not intend to implement or enforce [the Executive Order] in a manner which would prohibit the payment of wages and/or salaries to Plaintiff or any other employee or contractor of TikTok.” It is thus doubtful—at least at this time—that Ryan’s alleged fear that he faces prosecution is reasonable. The second reason for denying the temporary restraining order is that, even if the Court presently has jurisdiction, Ryan has not demonstrated that he is likely to suffer irreparable harm absent an immediate ruling. His vague allegation that he would suffer reputational harm from the government’s implementation of the Executive Order against TikTok certainly does not suffice. Ulrich v. City and County of San Francisco, 308 F.3d 968, 982 (9th Cir. 2002) (citing Paul v. Davis, 424 U.S. 693, 701, 711 (1976)). And to the extent Ryan seeks to protect a future paycheck (or to protect against prosecution for receiving money that TikTok owes him for work performed), that protection could be readily provided at a later date, if and when the possibility of losing it becomes more concrete. Of course, many of these cases may be moot, should the Treasury Department decide that the weird non-sale to Oracle solves any "problems" for TikTok. Meanwhile, there's still another lawsuit from a bunch of WeChat users about the Executive Order, and since there's no attempt to sell WeChat... that case may have a longer lifespan, but we'll cover that in another post (stay tuned).

Read More...
posted 4 days ago on techdirt
The Interactive Learn to Code Bundle has 9 courses designed to help you learn to code and write programs. The courses cover SQL, JavaScript, jQuery, PHP, Python, Bootstrap, Java, and web design. Each concept is explained in depth, with simple hands-on tasks to help you cement your newly gained knowledge. It's on sale for $30. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Read More...