posted about 4 hours ago on techdirt
Sometimes it's the things you don't do that can hurt you. The Sixth Circuit Court of Appeals has handed out a reminder to law enforcement officers that standing around while rights are violated can leave you just as liable as if you'd violated those rights yourself. The allegations behind the lawsuit and this rare denial of qualified immunity are horrifying. Being jailed is never pleasant, but the deputies involved in this case went out of their way to ensure this booking was particularly degrading. Keep in mind this was nothing more than an arrest for drunk driving. From the decision [PDF]:

Fazica had been wearing the jumpsuit that she had been issued at the Bloomfield Township jail with the arms tied around her waist and no underpants. She also wore a bra and shirt. Id. at 43, 46 (Page ID #181, 184). Once the officers brought her into the room, they placed her face down on her stomach in a prone position on the floor, still wearing the spit hood. Id. at 47 (Page ID #185). She was “freaking out” and asking “what are you guys doing,” but she was not physically resisting. Id. at 45–46 (Page ID #183–84). One officer “pushed her face down” and an officer “said everyone gets stripped search [sic], just shut up.” Id. at 45 (Page ID #183). Fazica does not recall how the officers got her shirt off. An officer ripped her pants off from behind—literally tearing them apart. Id. at 46 (Page ID #184). One officer then “had [her] butt cheeks spread apart and there was [sic] hands like he was feeling for something.” Id. He placed his hands on her genitals. Id. at 71 (Page ID #209). Another officer put his hands up the front of Fazica’s bra and felt her nipples; Fazica felt his hands shaking as he did it. The officer who felt Fazica’s breasts “asked what the clips were, and the gentleman behind [her] said they were the clips to [her] bra, don’t worry about it.” Id. at 47 (Page ID #185).
The same officer who had his hand on her breasts called her a bitch and one officer “kind of slap punched [her] when [she] was in the strip search room because he was mad because [she] was hysterical.” Id. at 48, 52 (Page ID #186, 190). The officers did not remove her bra. Id. at 47 (Page ID #185). She could not hear any female staff in the room and believes that no other females were present during the strip search. Id. Fazica knew that the officers who were strip searching her and who were present in the room were male because of their voices and their hands. Id. at 48 (Page ID #186).

After this sexual assault by jailers -- which is apparently part of the "normal" booking process (according to the deputies' testimony) -- officers took her to a cell. The plaintiff, Renee Fazica, was wearing nothing more than her bra and the spit hood the jailers had placed on her. The booking report does not contain any of these details. As the court notes, the narrative in the booking report wasn't written until nearly a month after Fazica was jailed. The official version of the arrest cleans everything up for public consumption. The only benefit it provided was giving Fazica the names of the jailers she couldn't see.

Booking received a call that Bloomfield Township was bringing in a new arrest, Inmate Fazica . . . and that she is intoxicated, yelling and spitting. . . . Sgt. Nicotri was notified. Supervisor Jordan was lead taser, Dep. Tucker was lead, Dep. Cordova and [Rodriguez] were wings and Supervisor Fletcher was four man [sic]. . . . Inmate Fazica was yelling as the door to the patrol car was opened. Dep. Tucker gained control of Inmate Fazica and with the assistance of Dep. Cordova and myself she was removed from the car. Dep. Tucker gained control of her head, Dep. Cordova and [Rodriguez] took control of her arms. A spit hood was then placed over her head. A pat down was then conducted for the safety and security of the Main Jail.
Inmate Fazica was then escorted to the Annex and taken into Cell 1E-4. Inmate was told to lay down on the floor and she complied. Inmate was then searched. The handcuffs were then removed. Inmate Fazica was ordered to stay on the floor until the cell door was closed. All team members then left the cell without further incident. Nurse Thorpe then medically cleared inmate Fazica of any injuries. Event entered into IMACS.

When sued for a variety of rights violations, all officers involved claimed to have no memory of the incident. No one remembered assaulting a female arrestee, much less participating in the extremely mild version of events recorded a month after Fazica was booked. The lower court denied qualified immunity to four of the named officers because there was still an open question as to which officers were involved. Since Fazica's view was obstructed by the spit hood, she was understandably unable to specifically allege which officer performed which violation. The officers appealed, arguing that because Fazica couldn't see who did what, all officers should be granted immunity. The court disagrees.

Defendants argue that because Fazica cannot clearly attribute particular uses of force to particular Defendants, she cannot prove that any particular Defendant’s conduct violated her constitutional rights. Def. Br. at 19–20. For example, they argue that she cannot prove whether it was Defendant Officer Fletcher, Cordova, Tucker, or Jordan who was the one to twist her arm behind her back, rip her pants off, touch her genitals, etc., and that therefore she must lose at summary judgment. We reject Defendants’ argument and conclude that a reasonable jury could find that each of the named Defendants violated Fazica’s clearly established constitutional rights either by directly using excessive force against her or by observing others doing so and failing to act.

That point is settled case law, as the court explains. Rights are not just violated by actions.
They are also violated by inaction. Government employees who stand idly by as rights are violated can be held accountable for not intervening. Whether directly participating or not, all government employees are supposed to help safeguard Constitutional rights. That means stepping up when someone else crosses the line, not just hanging back and hoping the eventual plaintiff doesn't name you as a defendant. In this case, the misapplication (whether deliberate or not) of the spit hood prevented Fazica from identifying the officers involved in the strip search/sexual assault. The defendants argued that the precedential cases involved intentional efforts made by officers to obscure their identities. Wrong again, says the court:

Defendants argue that the only reason that the court might deny qualified immunity in a case in which the plaintiff is not able conclusively to identify which officer committed which potentially unconstitutional act is “to avoid rewarding defendants who intentionally conceal their identities.” Def. Br. at 11. Certainly, disincentivizing officers from obscuring their identities so that they may use excessive force without consequences is a valid concern. See Burley I, 729 F.3d at 622. However, it is not the only concern. Plaintiffs who are unable to pinpoint precisely which named defendant did what, even where the defendants did not intentionally conceal their identities, still have an interest in the vindication of their constitutional rights. Section 1983 claims do not only incentivize officers’ good behavior; they also compensate and achieve justice for victims.

More explicitly:

[T]he obviousness of some of the acts Fazica recounts support the conclusion that the Defendants noticed the conduct and failed to intervene to stop it. Fazica stated that her pants were physically torn off her body before her genitals and breasts were groped, and the officers testified that strip searches do not usually involve physical contact with the inmate’s body.
A jury could reasonably conclude that when an officer commits such acts, his colleagues are likely to notice.

This doesn't mean Fazica has won or is likely to when her case returns to the lower court. What it does mean is that the accused officers won't be shielded from this lawsuit and will have to actually defend themselves against her allegations. Most importantly, the court has reiterated, on the record, that standing by while rights are violated is no better than violating them yourself.

posted about 9 hours ago on techdirt
We've entered something of a moral panic, or at least an impressive uptick in public awareness, around the concept of deep fakes. These videos, edited and manipulated through technology, have managed everything from making the Speaker of the House appear drunk to putting caricature-like words in the mouth of Facebook's Mark Zuckerberg. On the topic of Facebook, it's been somewhat interesting to watch various internet sites diverge on exactly how to approach these deep fakes once they are reported. Facebook kept up the Pelosi video and, to its credit, the Zuckerberg video, but added some text to alert viewers that it was faked. Other sites, such as YouTube, have chosen to take certain deep fake videos down. One of those, as occurred recently, was a deep fake of Kim Kardashian that altered an interview given to Vogue Magazine, such that she appears to be discussing a conspiratorial group called Spectre and giving her own fans a hard time. It's all fairly parodic and not something that passes the most basic smell test. And, yet, as the discussion rages on as to how sites should respond to and handle deep fakes, this particular video was taken down due to a copyright claim.

The Kardashian deepfake, uploaded to YouTube on May 29 by anti-advertising activists Brandalism, was removed because of a copyright claim by publisher Condé Nast. The original video used to make the deepfake came from a video uploaded in April by the publisher’s Vogue magazine.

“It certainly shows how the existing legal infrastructure could help,” Henry Ajder, head of communications and research analysis at Deeptrace, told Digital Trends. “But it seems to be available for the privileged few.”

That should be the absolute least of anyone's concerns. In one of our previous posts on the topic of deep fakes, we highlighted a tweet that neatly summarizes the entire real problem with taking down deep fakes generally, and with using copyright to do so even more specifically.
homework assignment: draft the rule that prohibits doctored pelosi video but protects satire, political speech, dissent, humor etc. not so easy is it? https://t.co/zaA7kQf83i — David Kaye (@davidakaye) May 25, 2019

As hard as it is generally to come up with an answer to this homework assignment, it is all the more difficult to answer this question with copyright law. Copyright very specifically carves out space for all of the above to make room for fair use, which is why it so boggles the mind that YouTube agreed to take down this Kim Kardashian video in the first place. The entire point of this particular deep fake is far less malicious than the Pelosi video and seems to be completely geared toward humor and parody. Suggesting that moves like this are a problem because they're only available to the wealthy misses the point: moves like this aren't legally available to anyone at all, rich or otherwise.

The Kardashian copyright claim has the potential to set a new precedent for when and how these kinds of videos are taken down, he added. It’s a tricky problem, since no one has decided if the manipulated videos fall into the category of fair use. Taking videos like these down open up giant tech companies to accusations that they’re impinging on freedom of expression.

Yeah, exactly. As of this writing, the Kardashian deep fake remains taken down. That is plainly absurd. Meanwhile, YouTube isn't talking, and apparently nobody has slapped Conde Nast on the wrist yet, either. None of this is to say that the ability to create deep fakes isn't a problem, of course. But it sure as hell isn't a problem that can be easily solved by throwing copyright law at it.

posted about 10 hours ago on techdirt
On Tuesday we did a deep dive into the whole kerfuffle over Genius claiming that Google was "scraping" its lyrics and explained why the whole story was a huge nothingburger. There are lots of reasons to be worried about Google, but this was not one of them. Among the many, many points in the article, we noted that Google had properly licensed the lyrics, that LyricFind admitted that it was the one responsible, that most publishers don't even know the lyrics they're licensing in the first place, and that basically everyone just copies them from everyone else. And, now, just to put a fine point on how this entire story in the Wall Street Journal (which has published multiple anti-Google editorials over the past few years) was concocted just to attack Google over something it hadn't done, a Wired article analyzing the situation notes that Microsoft's Bing and Amazon Music also display the identical lyrics that appear to have the "coded" or "watermarked" apostrophes that Genius put in place:

One thing that some news stories have missed about Genius’ allegations is that Google is far from alone in surfacing lyrics that may have originated from Genius. Microsoft Bing and Amazon Music also appear to have Genius-watermarked lyrics.

And Genius' response to this further evidence that it's not Google doing anything particularly nefarious?

Genius would not comment on other sites’ apparent use of its transcripts.

In other words "hey, don't mess with our narrative that big bad Google is the problem..." Again, there are plenty of reasons to be concerned about Google -- and we've covered many of them over the years. But a totally misleading and ginned up story that does not accurately portray the situation or the law is not helping anything but the outrage machine.
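The watermarking trick itself is simple enough to sketch. The following Python snippet is an illustrative reconstruction, not Genius's actual code: the reported idea is that the pattern of straight versus curly apostrophes in a transcription survives copy-paste, so a matching pattern on another site is evidence the text was copied rather than independently transcribed. The dot/dash mapping and the function names here are assumptions for illustration.

```python
STRAIGHT = "'"          # U+0027, typewriter apostrophe
CURLY = "\u2019"        # U+2019, curly right single quotation mark

def extract_pattern(lyrics: str) -> str:
    """Reduce a transcription to the sequence of its apostrophe styles."""
    out = []
    for ch in lyrics:
        if ch == STRAIGHT:
            out.append(".")     # assumed mapping: straight -> dot
        elif ch == CURLY:
            out.append("-")     # assumed mapping: curly -> dash
    return "".join(out)

def same_watermark(a: str, b: str) -> bool:
    """Matching apostrophe patterns across two sites suggest copying."""
    pa, pb = extract_pattern(a), extract_pattern(b)
    return bool(pa) and pa == pb
```

Since the two characters render almost identically, a human transcriber would be very unlikely to reproduce the same alternation by chance, which is what makes the pattern usable as a fingerprint.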

posted about 12 hours ago on techdirt
Internet hellhole 8chan has been hit with a federal search warrant. The site, created to serve those who felt 4chan's nearly-nonexistent moderation was too restrictive, has been front and center recently due to its hosting of manifestos by mass shooters who apparently frequented the site. In this case, an investigation into a shooting at a California mosque has led the FBI to the pages of 8chan. Postings at the site -- along with some at Facebook -- have linked the shooter to the Christchurch shooting in New Zealand. According to the affidavit [PDF], the FBI believes the California mosque shooter was "inspired and/or educated" by the New Zealand shooter's manifesto and actions. The Poway shooter is already in custody, so the value of the information sought here is questionable. While the info may have some value in establishing the shooter's state of mind, as well as his connection to other crimes, the warrant does bear some resemblance to a fishing expedition. From the affidavit, it appears the feds have no shortage of evidence to use against the shooter:

Using various search methods, Whitney Buckingham, an SDSD system data miner, found a manifesto on Pastebin.com written by a person identifying himself as John Earnest. In the manifesto, which he named "An Open Letter", Earnest made many anti-Semitic and anti-muslim statements. One such statement which is a direct quote is, "As an individual, I can only kill so many Jews." He states he is not a terrorist but that he hates anyone who he sees as a threat to his country. Earnest took credit for a fire that had been set at a mosque in Escondido a few weeks earlier. His exact statement was "I scorched a mosque in Escondido with gasoline a week after Brenton Tarrant's sacrifice and they never found shit on me." Additionally, he wrote "I spray-painted on the parking lot. I wrote 'For Brenton Tarrant -t./pol/."

Tarrant is the New Zealand shooter Earnest apparently tried to emulate.
Obviously, the threat of copycat killers is always a concern following mass shootings, but what the government is demanding here has the potential to sweep up dozens of users who did nothing but reply to threads involving the arrested shooter.

Agents seek IP address and metadata information about Earnest's original posting and the postings of all of the individuals who responded to the subject posting and/or commented about it. Additionally, agents seek information about any other posting coming from the IP address used by Earnest to post the subject posting.

This seems like a lot of people to be investigating for just being in the wrongest place on the internet at the wrong time. The justification for this is speculation that others who viewed the post either will become shooters themselves or somehow conspired with the shooter to carry out this horrible crime in which Earnest was the only shooter.

As discussed above, Earnest made a posting in which he thought to draw attention to his forthcoming attack on the Chabad of Poway, share his views through his open letter, and offer people the opportunity to observe the attack itself. Several people responded, both individuals who were taken aback about the posting as well as people who were sympathizers. As a result, some of the individuals may be potential witnesses, co-conspirators and/or individuals who are inspired by the subject posting. Based on agents' training and experience, following attacks such as those conducted by Earnest, other individuals are inspired by the attacks and may act of their own accord.

By its own admission, the FBI is seeking information about posters "taken aback" by Earnest's post -- users unlikely to be "inspired" by the shooting or to be his co-conspirators.
Apparently, the FBI doesn't trust 8chan to make that assessment, so it's asking for everything so it can sort through it and draw its own conclusions, engage in its own "non-custodial" interviews, subpoena a number of other service providers for more info, etc. In fact, the FBI would prefer Ch.net -- the host for 8chan -- just hand over everything demanded by the warrant without getting involved at all.

In order to accomplish the objective of the search warrant with a minimum of interference with the business activities of Ch.net, to protect the rights of the subject of the investigation and to effectively pursue this investigation, authority is sought to allow Ch.net to make a digital copy of the entire contents of the accounts subject to seizure.

However you may feel about 8chan and its denizens (and I hope those feelings are mostly negative), this is not a justifiable demand for information. The FBI wants everything on everyone in that thread, even as it states some of the users it's targeting were appalled by what they were seeing. This makes everyone in the thread a suspect and treats anonymous users of this site as inherently suspicious, no matter what their posts actually say.

posted about 14 hours ago on techdirt
Senate newbie Josh Hawley has made it clear that he's no fan of big internet companies and has joined with others in suggesting that Section 230 is somehow to blame for whatever it is he dislikes (mainly, it seems, that the public likes them too much). So now he's proposed a massively stupid and clearly unconstitutional bill, called the "Ending Support for Internet Censorship Act," to wipe out CDA 230 protections for large internet platforms. The proposal is shockingly dumb and so obviously unconstitutional it boggles the mind that Hawley is actually a constitutional lawyer.

The bill is pretty straightforward, both in how it operates and in how misguided it is. If you're a "big" internet platform -- defined as having more than 30 million "active monthly users" in the US or more than 300 million such users globally (or having over $500 million in revenue) -- then you automatically lose the protections of CDA 230. You can regain them by making a request to the FTC. In order to get them, you have to pay for an "audit" of your content moderation practices, and pro-actively "prove" via "clear and convincing evidence" that the practices are "politically neutral." Once you do that, the FTC would "vote" on whether or not you could get CDA 230 protections, and they would only be granted with a "supermajority vote," which would mean at least four out of the five commissioners would have to vote for it. Since FTC Commissioners are always 3 to 2 in favor of the political party in the White House, that means any internet company that wants to get approval would need to get at least one commissioner of the non-Presidential party to vote for the immunity as well. There's no way this survives constitutional scrutiny (if it actually becomes law, which seems unlikely).
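For what it's worth, the bill's size test described above is at least mechanically simple; it can be sketched as a single predicate (thresholds taken from the bill as summarized here; the function name is my own, and this is an illustration of the rule as reported, not statutory text):

```python
def covered_platform(us_monthly_users: int,
                     global_monthly_users: int,
                     annual_revenue_usd: int) -> bool:
    """True if a platform is big enough to lose default CDA 230
    protection under the bill as described: crossing any one of the
    three thresholds is sufficient."""
    return (us_monthly_users > 30_000_000
            or global_monthly_users > 300_000_000
            or annual_revenue_usd > 500_000_000)
```

A platform with 40 million US users would be covered even with zero revenue, which is what makes the "vote your way back in" mechanism the only path for every major service. The vagueness problems all live in the other half of the scheme: there is no comparably simple predicate for "politically neutral."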
The First Amendment pretty clearly says that Congress can't create a law that (1) forces a company to get approval for its moderation practices and (2) judges content on whether or not it's deemed "politically neutral." Also, what the hell does "politically neutral" even mean? It doesn't mean anything. And, as for "clear and convincing evidence," tons of people have pointed to clear and convincing evidence that these platforms don't moderate based on political viewpoints, and yet we still have tons of people insisting they do. Nothing is going to convince some people that the platforms are actively targeting conservatives, no matter how many times evidence to the contrary is presented. Hawley has set up a purposefully impossible standard. As we've pointed out, many people still insist that Twitter deciding to kick off literal Nazis is "evidence" of anti-conservative bias. As NetChoice points out, Hawley's bill would require sites to host KKK propaganda just in order to obtain basic liability protections. Is Josh Hawley truly arguing that any large website must cater to Nazis if it wants to allow public conversation? Because, damn, dude, that's a bold call. This is from the guy who claims to be a "Constitutional Conservative"? Really? His current bio hypes up that he's a "leading constitutional lawyer" and talks about how he was one of the lead attorneys in the Hobby Lobby case, which was (in part) about defending a company's right to refuse, on religious freedom grounds, to obey certain laws that conflicted with the religious beliefs of its owners. So, apparently, in that case, it's bad for the government to enforce rules for private businesses -- but for other kinds of companies, the government should force them to moderate content in a particular way? I mean, is Hobby Lobby forced to be "politically neutral" in the products it sells in its shops? You'd expect Hawley to be at the front of the line screaming about how awful that would be.
Can you imagine the stink that Hawley himself would put up if Congress attempted to force Hobby Lobby to be "politically neutral" in its own actions? Either way, this law is a non-starter, and once again shows that Hawley isn't legislating from any position of principle, but is grandstanding with clearly unconstitutional ideas in the belief that self-identified "conservatives" hate the big internet companies these days, so any attack on them, no matter how dumb and unconstitutional, must be fine. As TechFreedom points out, this is little more than a fairness doctrine for the internet -- something conservatives have been against for decades. Incredibly, for all of the misguided and misleading complaints about how "net neutrality" was the "government takeover of the internet," Hawley's bill actually does a bunch of the things that opponents of net neutrality pretended net neutrality would do -- and yet, because it's politically expedient, you can likely bet that many of those who were against net neutrality will now support Hawley's ridiculous bill.

posted about 14 hours ago on techdirt
Capable of reducing the size of soft goods by up to 70 percent, the Dr. Save Vacuum Pump removes air and compresses items placed inside the provided reusable bags, and is perfect for traveling. When you're not vacationing abroad, this pump is a dream for long-term storage, too, letting you pack away bulky winter coats or last summer’s beachwear with minimal hassle. It's on sale for $29.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

posted about 15 hours ago on techdirt
Everyone's got it out for Section 230 of the Communications Decency Act these days. And pretty much any excuse will do. The latest is that last week, Rep. Adam Schiff held a hearing on "deep fakes" with a part of the focus being on why we should "amend" (read: rip to shreds) Section 230 of the Communications Decency Act to "deal with" deep fakes. You can watch the whole hearing here, if you're into that kind of punishment. One of the speakers was law professor Danielle Citron, who has been a long-time supporter of amending CDA 230 (though, at the very least, she has been a lot more careful and thoughtful about her advocacy on that than many others who speak out against 230). And she recommended changing CDA 230 to deal with deep fakes by requiring that platforms take responsibility with "reasonable" policies:

Maryland Carey School of Law professor Danielle Keats Citron responded suggesting that Congress force platforms to judiciously moderate content in any changes to 230 in order to receive those immunities. “Federal immunity should be amended to condition the immunity on reasonable moderation practices rather than the free pass that exists today,” Citron said. “The current interpretation of Section 230 leaves platforms with no incentive to address destructive deepfake content.”

I have a lot of different concerns about this. First off, while everyone is out there fear mongering about the harm that deep fakes could do, it's not yet clear that the public can't figure out ways to adapt to this. Yes, you can paint lots of stories about how a deepfake could impact things, and I do think there's value in thinking through how that may play out in various situations (such as elections). But to assume that deepfakes will absolutely fool people, and that therefore we need to paternalistically "protect" the public from possibly being fooled, seems a bit premature. That could change over time.
But we haven't yet seen any evidence of any significant long-term effect from deepfakes, so maybe we shouldn't be changing a fundamental internet law without actual evidence of the need. Second, defining "reasonable moderation practices" in law seems like a very, very dangerous idea. "Reasonable" to whom? And how? And how can Congress demand reasonable rules for moderating content without violating the 1st Amendment? I don't see how any proposed solution could possibly survive constitutional scrutiny. Finally, and most importantly, Citron is just wrong to claim that the current structure "leaves platforms with no incentive to address destructive deepfake content." As I said, I find Citron to be more thoughtful and reasonable than many critics of Section 230, but this statement is just bonkers. It's clearly false, given that YouTube has taken down deepfakes and Facebook has pulled them from algorithmic promotion and put warning flags on them. It certainly looks like the current system has provided at least some incentive for those platforms to "address destructive deepfake content." You can disagree with how these platforms have chosen to do things. Or you can claim that there need to be different incentives, but to say there are no incentives is simply laughable. There are plenty of incentives: there is public pressure (which has been fairly effective). There is the desire of the platforms not to piss off their users. There is the desire of the platforms not to invite continued angry rants (and future regulations) from Congress. And, importantly, section (c)(2) of CDA 230 is there to encourage this kind of experimentation by the platforms. They are given the benefit of not facing liability for moderation choices they make, which is actually a very strong incentive for those platforms to experiment and figure out what works best for them and their particular community.
Any effort to change the law to demand "reasonable moderation practices" is going to come up against difficult situations and create something of a mess. If we pass a law that forces Facebook to remove deepfakes, does that mean Facebook, Twitter, and others would have to remove the various examples of deepfakes that are more comedic than election-impacting? For example, you may have recently seen a viral deepfake of Bill Hader on Conan O'Brien doing his Arnold Schwarzenegger impression, in which he subtly morphs into Schwarzenegger. Would a "reasonable" moderation policy forbid such a thing? Also, different kinds of sites have wholly different moderation approaches. How do you write a rule that applies equally to Facebook, Twitter, YouTube... and Wikipedia, Reddit, and Dropbox? You can argue that the first three are similar enough, but the latter three work in wholly different ways. Crafting a single solution that works for all is asking for trouble -- or will wipe away significant concepts on how to run online communities. I can completely empathize with the worries about deep fakes and what they could mean long term. But let's not use this moral panic to overreact, without evidence of harm, and completely change the internet -- especially with silly claims falsely stating that there are no incentives for platforms to handle the problematic side of this technology already.

posted about 18 hours ago on techdirt
Back in November of 2017, AT&T promised that if it received a tax break from the Trump administration, it would invest an additional $1 billion back into its network and employees. At the time, CEO Randall Stephenson proclaimed that "every billion dollars AT&T invests is 7,000 hard-hat jobs." Not "entry-level jobs," AT&T promised, but "7,000 jobs of people putting fiber in ground, hard-hat jobs that make $70,000 to $80,000 per year." Yeah, about that. The Trump tax cut resulted in AT&T getting billions in immediate tax relief, and roughly $3 billion in tax savings annually, in perpetuity. Yet when it came time for AT&T to re-invest this money back into its network and employees, AT&T actually did the opposite and began laying them off in droves. Unions claim AT&T has laid off an estimated 23,000 workers worldwide since the Trump tax plan, with investors and executives unsurprisingly pocketing the savings. This week, word came down that AT&T would be laying off thousands more as it wraps up fiber deployment:

Leaked internal documents confirmed most of the 1,800 planned job cuts. One AT&T surplus declaration shows that more than 900 of the surplus jobs come from the company's Southeast division in Alabama, Florida, Georgia, Kentucky, Louisiana, Mississippi, North Carolina, South Carolina, and Tennessee. This document attributes most of the cuts to "economic" reasons and some to "technological/operational efficiency."

While AT&T had been deploying fiber as per a 2015 DirecTV merger condition (deployment the Trump FCC tried to credit to its elimination of net neutrality), the company recently finished up those obligations, and has now returned to the default state of most US telcos: skimping on fiber upgrade investment thanks to limited competition. There's been nary a peep from Trump INC because, unless you're new here or exceedingly gullible, fattening investor and executive wallets was the entire point.
Objective experts say the tax cuts uniformly failed to deliver on the numerous investment and employment promises made by Trump INC. Of course, this is a shtick AT&T has been engaging in for decades now, under both political parties. AT&T will promise a universe of jobs and network investment if it gets "X" (X = killing net neutrality, a tax break, eliminating consumer protections, approving a new merger, passing a law AT&T wrote), then bail on following through. Nobody in government much cares because AT&T is among the wealthiest and most politically powerful companies in the world. Nobody in the press much cares because covering failed AT&T promises on the tech policy front doesn't get hits, and most journalists are too young to remember the last forty times we've gone through this. While the press likes to suggest that the Trump administration is "at odds" with AT&T because his DOJ sued to block AT&T's attempted merger with Time Warner, that had more to do with pleasing Rupert Murdoch and angering CNN than any real disdain for AT&T. AT&T's largely been a loyal ally to the Trump administration ever since one of its lobbyists paid Michael Cohen $600,000 for additional access to the President, and if you step back and really try to calculate the billions AT&T has gleaned from Trump and Ajit Pai's apathetic tenures so far, you'll quickly get a nasty headache.

Read More...
posted about 21 hours ago on techdirt
Shortly after the Christchurch mosque shooting, the New Zealand government's censorship board decided to categorize almost everything related to the shooting (the shooter's manifesto, his livestream of the shooting, his social media posts) as "objectionable." This wasn't just a case of the board reaching an obvious conclusion. Officially terming it "objectionable" made it a criminal act to distribute any of this content via social media or other services. Having done that, the government wasted no time bringing criminal charges against violators. The first arrest happened only two days after the shooting, netting the government an 18-year-old defendant. The more interesting arrest was the second one, which landed Philip Arps, a local businessman with some not-so-latent white nationalist leanings. Arps spent the hours after the shooting refusing to condemn the violent act and -- the event triggering the criminal charges -- passing around footage of the shooting. Not all that surprising for a man whose company is named after a German prison camp and who charges $14.88 a foot for insulation installation. Since each count against Arps could have netted him a maximum of 14 years in prison, the final sentence seems comparatively light. A businessman in New Zealand has been sentenced to nearly two years in prison for sharing footage of the Christchurch mosque attacks, which saw a lone gunman livestream the massacre of 51 Muslims during Friday prayers on March 15. Philip Arps, 44, was sentenced during a court hearing in Christchurch on Tuesday after having earlier pleaded guilty to two charges of distributing objectionable material. Arps will spend 21 months in prison for sharing footage of the shooting with 30 people. This sentence only seems reasonable in comparison to the 28 years he could have been hit with. What's not reasonable is putting someone in prison for sharing footage of a crime committed by someone else, no matter how objectionable their personal beliefs are. 
The government's immediate reaction to this tragedy has been emotionally charged. This may make for speedy legislating, but first reactions are rarely the most thoughtful reactions. The government has criminalized the sharing of content the general public is going to naturally find interesting. People will seek it out and share it -- some out of curiosity and some to continue spreading their hate as thinly as possible. This behavior shouldn't be encouraged, but it also shouldn't be criminalized. But legislators and the state censorship board saw an opportunity to make a statement -- one that came with prison sentences attached -- that few in the nation would openly object to. This opportunism is going to result in some sketchy prosecutions in the future -- ones far less clear-cut than the punishment of a New Zealand citizen for being an asshole.

posted 1 day ago on techdirt
It's been nearly a decade since we last wrote about the Australian Aboriginal flag and the insane copyright issues surrounding it. That time, back in 2010, it involved the copyright holder of the flag forcing Google to edit the flag out of one of its famous Google doodles, where it had originally been included as part of an Australia Day celebration. The problem, as you might have guessed, is that the flag was designed in the early 1970s "as a symbol of unity and national identity" by Harold Thomas. Because it was the creation of a private individual, and not a government, Thomas claims to hold a copyright on the image. He didn't do much with that copyright for decades, while the flag became an established symbol for indigenous Australians. Then, suddenly, he discovered he held the copyright and started making use of it. Apparently, that's ramped up even more in the last few months after Thomas did a licensing deal with a clothing company, followed by the traditional "sending of the cease-and-desist letters." In October 2018, Thomas granted WAM Clothing worldwide exclusive rights to use the flag on clothing. Late last week, it issued a series of “cease and desist” notices to several companies, including the AFL, which uses the flag on jerseys for the Indigenous round, and an Aboriginal social enterprise which puts the profits of its clothing sales back into Aboriginal community health programs. A spokesperson for WAM Clothing said it had been “actively inviting any organisations, manufacturers and sellers who wish to use the Aboriginal flag on clothing to contact us and discuss their options”. “Until WAM Clothing took on the licence Harold was not receiving recognition from the majority of parties, both here and overseas, who were producing a huge amount of items of clothing bearing the Aboriginal flag,” the spokesperson said. 
Of course, some might argue that if you design a "flag" and declare that you did so "as a symbol of unity and national identity," and then allow that flag image to be used for decades in order to establish it as identifying indigenous Australians, it is (1) kind of an obnoxious move to then register a copyright, license it, and start sending out legal threats, and (2) blatantly at odds with anything copyright law is supposed to accomplish. Thomas did not design the flag because of the incentives of copyright law, as even he admits. The idea that he then gets to benefit from a law that had nothing to do with incentivizing the flag's creation seems quite ludicrous. Meanwhile, the mess has copyright lawyers in Australia suggesting that the government forcibly buy out Thomas' copyright: Former CEO of the Australian Copyright Council Fiona Phillips says the legal status of the Aboriginal flag is a “unique situation” that requires a public policy solution. [....] “The Aboriginal flag is not just an artistic work, it’s a national symbol and is particularly important to Indigenous Australians,” said Phillips, who has also worked at the Australian Competition and Consumer Commission and as a government adviser on copyright law. “The government could seek to compulsorily acquire copyright from Mr Thomas on public policy grounds. They could buy him out for the rights.” Yes, the government could do that, and it would still be fairly crazy. It seems like a better idea would be recognizing that if you push something out there as a symbol for all to use, and then decades later come back with copyright demands, the copyright claims should be laughed at, rather than made real. Tragically, Australia went in the other direction, leading to the present mess.

posted 1 day ago on techdirt
The Australian government approved an amended copyright law late last year that made subtle changes to the types of sites courts can order ISPs to block, and to the process by which such orders are obtained. Essentially, the changes amounted to allowing blocking of sites whose primary "effect" is copyright infringement, rather than their primary "purpose," along with an expedited process for getting additional site-blocking orders for sites that set up mirror sites to route around the blocks. Before the ink on the legislation was even dry, just as we warned, Village Roadshow and a bunch of American entertainment companies swooped into the court system seeking blocks on all kinds of sites. And now it appears those groups were just getting started. After getting 181 domains blocked late last year, industry groups have decided to expand that with a recent request to block an additional 105 domains. Soon after, the same companies (plus Australian distributor Madman and Tokyo Broadcasting) returned to court with a new application to block 79 “online locations” associated with 99 domains. The order appears to have changed slightly since the original application. It now lists 104 domains spread across 76 allegedly-infringing platforms. Many of the sites are well-known torrent and streaming services, including StreamCR, Torrenting, TorrentLeech, AnimeHeaven, and HorribleSubs, to name just a few. It's a significant number of sites, to be sure, and it's all enabled by the change in the copyright law. It's worth keeping in mind that we're less than a year into the change in law, and the entertainment industry has already blocked something like 200 sites. Even if we were to stipulate the pirate-y nature of these sites, which we shouldn't, the speed at which this much wholesale blocking is being done is tremendous. On the topic of whether all the sites being blocked are pirate sites, at least one of those sites is attempting to defend itself. 
It’s extremely unusual for any sites to mount any kind of defense against blocking but earlier this year, Socrates Dimitriadis – the operator of Greek-Movies.com – did just that. “My site is just a search engine that refers users to third-party websites,” he explained in a letter to the Court. That appears to have held no sway with the Judge. Greek-Movies is the 15th site listed in the injunction, with ISPs required to target its main domain (greek-movies.com) and/or its IP address 136.243.50.75, using DNS, IP address or URL blocking, or “any alternative technical means”. This reveals the pernicious nature of the "purpose" to "effect" change in copyright law. There are simply no clear lines drawn here, which has now resulted in a site that does not host any infringing content being blocked under the argument that its primary effect is still copyright infringement. Precisely how long do you think it will take before someone in the music industry attempts to get YouTube blocked using that same argument? After all, there is a lot of infringement being done on YouTube, even though the primary purpose of the site is certainly not to commit copyright infringement. It sure seems like someone could do a statistical analysis of views and/or traffic on YouTube, mess with the data, and reach the conclusion that infringement is a primary effect of the site, no? Again, we're not even a year in. This is only going to get much, much worse.
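As an aside, the injunction's reference to "DNS, IP address or URL blocking" covers genuinely different mechanisms. Here is a purely illustrative sketch of how the first two behave from an ISP's side -- nothing below comes from the court order except the domain and IP address it names; the lookup tables and function names are hypothetical stand-ins:

```python
# Hypothetical model of ISP-side court-ordered blocking (illustration only).
# The blocked domain and IP are the ones cited in the injunction; everything
# else (tables, hosts, function names) is made up for the example.

DNS_BLOCKLIST = {"greek-movies.com"}   # DNS blocking: the resolver refuses to answer
IP_BLOCKLIST = {"136.243.50.75"}       # IP blocking: packets to the address are dropped

# A toy stand-in for real DNS records.
HOSTS = {
    "greek-movies.com": "136.243.50.75",
    "example.org": "93.184.216.34",
}

def resolve(domain):
    """Simulated ISP resolver: blocked domains simply fail to resolve."""
    if domain in DNS_BLOCKLIST:
        return None  # behaves like an NXDOMAIN response
    return HOSTS.get(domain)

def can_connect(ip):
    """Simulated packet filter: traffic to blocked IPs never gets through."""
    return ip not in IP_BLOCKLIST
```

Note that DNS blocking is trivially bypassed by switching to a third-party resolver, and IP blocking can take down unrelated sites sharing the same address -- which is presumably why orders like this one stack all three methods plus "any alternative technical means."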

posted 1 day ago on techdirt
Live streaming is here to stay, and it seems to be getting more popular by the minute — but for many people, it still seems like a foreign land and evokes a cliched "I feel old" response. This week, Mike is joined by not-so-regular-anymore co-host Dennis Yang, who has been experimenting with Twitch, to get a beginner's perspective on the platform, the community, and the medium of streaming. Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

posted 1 day ago on techdirt
So, if someone can be sentenced to two years in prison for 40 minutes of newspaper website defacement performed by a party other than himself, it stands to reason someone who took down five websites would be looking at a minimum of ten years in jail. Welcome to the hilarious and tragic world of CFAA-related sentencing. Matthew Keys was hit with a two-year sentence for sharing his login password, an act that resulted in someone else subjecting the L.A. Times website to a 40-minute inconvenience. The momentary vandalism of the site's landing page suggested Congressional representatives were being pressured to elect CHIPPY 1337. No. Seriously. That was the extent of the "damage." Once the DOJ decided this was worth pursuing under the CFAA, internal L.A. Times emails regarding the "hack" suddenly cost $225 each to create. The feds wanted five years but settled for two. And while Matthew Keys served his sentence, no one in the federal government made any effort to locate the person who actually performed the website defacement. A more serious hacking -- one that resulted in five news websites being completely unreachable for a short period of time -- has netted the "hacker" involved a very lenient sentence. The 36-year-old man who hacked and temporarily shut down Palo Alto Online and other Embarcadero Media websites nearly four years ago was sentenced Wednesday in San Jose federal court to time already served, one-year of home incarceration with electronic monitoring, three years of supervised release and $27,130 in restitution to the company. Ross Colby was indicted on April 6, 2017, following an investigation by the Federal Bureau of Investigation of the Sept. 17, 2015, crime, which took down five news sites owned and operated by Palo Alto-based Embarcadero Media: Palo Alto Online, Mountain View Online, Almanac Online, PleasantonWeekly.com and DanvilleSanRamon.com. Colby was convicted on all charges, but will serve essentially no additional prison time. 
The six months he spent in jail prior to his trial will be all the time he's required to serve. Colby claimed -- during an interview with the FBI -- to have performed the hack at the request of a Menlo Park resident (Hiruy Amanuel) who wished to have stories about him removed from the websites. Amanuel, currently located in Ethiopia, denies he asked Colby to hack the sites. As in the Keys case, the end result was a temporary defacement. But this hack also made the sites' content unreachable by readers. The temporary damage Colby caused was far more significant than the minor prank pulled by someone (not Matthew Keys!) with Keys' login info. Colby deleted the content of all of Embarcadero's websites and replaced it with an image of Guy Fawkes, the icon of the activist group Anonymous, and posted a message stating: "Greetings, this site has been hacked. Embarcadero Media Group (Alamanac) (sic) has failed to remove content that has been harmful to the wellbeing and safety of others. Failure to honor all requests to remove content will lead to the permanent shutdown of all Embarcadero Media websites." Each website's URL was replaced with the text "Unbalanced journalism for profit at the cost of human right, Brought to you by the Almanac." So, why the disparity in sentencing? Well, it boils down to several things, starting with the law itself. The law is broad and vague and can be beaten to fit/painted to match almost any "unauthorized access." Furthermore, CFAA charges are confounding for juries, judges… even the DOJ itself. It's tough to assess the actual damages of a website defacement, so the DOJ relies on the aggrieved party, which has every motivation to portray momentary inconveniences as internet apocalypses. Meanwhile, judges and juries get swamped in techno-jargon, with no one to lead them to the promised land of "laymen's terms" but the prosecution. 
In Colby's case, a couple of attempts to have him declared incompetent to stand trial tried the court's patience, as did Colby's hiding of a recording of his interview with the FBI. And yet, he got less time than Keys did for a more serious attack on multiple websites -- one Colby actually performed, rather than farmed out to a willing miscreant. Because the law makes so little sense, the outcomes will be nonsensical. The only hope is a complete rewriting of the law -- one that takes charging security researchers and internet jokesters out of the equation. The government may claim harsh sentences are needed to act as a deterrent, but this assertion makes no sense when it showed zero interest in finding the person who actually defaced a Tribune website with borrowed credentials.

posted 1 day ago on techdirt
Last fall I wrote about the Supreme Court agreeing to hear a case that some argued would allow the Supreme Court to declare that social media sites were public forums, thereby limiting their ability to block or ban certain users. A key argument brought forth by many who have been kicked off various social media platforms is that, under a strained reading of both the Pruneyard case (a very narrowly ruled case, establishing malls as public forums) and the Packingham case (which said states cannot create laws that ban people from the internet), social media platforms like YouTube, Facebook and Twitter are some sort of quasi-public forums, and therefore the 1st Amendment applies to them as state actors... and therefore they can't ban anyone or block content. This has never made much sense, and required a pretty twisted reading of those other cases -- but there was some thought that this new case might allow the Supreme Court to weigh in on the subject. The details of the case are a bit involved -- and you can read the original post for more details -- but the short version is that two producers were fired from a public access channel, Manhattan Neighborhood Network, for criticizing MNN. The two fired producers, DeeDee Halleck and Jesus Melendez, argued that this violated the 1st Amendment, because MNN was set up by New York City's government, as required by New York State. Thus, there was a strong argument that MNN was a public forum, given the state's role in creating it. The 2nd Circuit agreed that it was a public forum and MNN appealed to the Supreme Court, raising the specter that if the ruling were allowed to stand, it could end up being applied to the various social media platforms as well, creating quite a mess. As I wrote in my post about it, this seemed like a stretch as well, since the state's role in creating MNN was a key factor here, and that was not at all true with social media platforms. 
I also thought that the Supreme Court would likely rule narrowly and avoid the issue of social media platforms altogether -- though, given the political climate, I feared that the Supreme Court would say something stupid on this and create a new mess. Instead, the ruling, which came out earlier this week, went in the opposite direction. While the ruling itself doesn't directly apply to social media, the Supreme Court actually reversed the 2nd Circuit ruling that declared MNN a public forum, and very strongly hinted that it's ridiculous to think social media platforms could be considered public forums. And, for all the so-called "conservatives" who have been the most vocal in promoting the theory that social media sites are public fora governed by the 1st Amendment, it might surprise them to find that it was the so-called "conservative Justices" who decided this one, with Kavanaugh writing the opinion, joined by Roberts, Thomas, Alito and Gorsuch -- and Sotomayor writing the dissent, joined by Ginsburg, Breyer and Kagan. Indeed, hysterically, it appears that a key argument made by the majority to argue against a finding of a public forum is one from one of the "conservatives" currently suing a platform. Stay tuned for that tidbit. But first, the decision itself. I was wrong in expecting the court to uphold the 2nd Circuit's ruling (and my fear was that they would apply it in a way that was too broad). But Kavanaugh and the majority make it clear that they see public forum doctrine to be very, very, very limited. And it doesn't apply to a public access TV network, even one created by the state. Under the state-action doctrine as it has been articulated and applied by our precedents, we conclude that operation of public access channels on a cable system is not a traditional, exclusive public function. Moreover, a private entity such as MNN who opens its property for speech by others is not transformed by that fact alone into a state actor. 
In operating the public access channels, MNN is a private actor, not a state actor, and MNN therefore is not subject to First Amendment constraints on its editorial discretion. The key to Kavanaugh's ruling is that to make a private entity a public forum, it needs to take over "powers traditionally exclusively reserved to the State." The "exclusively" part is what the majority focuses on. It is not enough that the federal, state, or local government exercised the function in the past, or still does. And it is not enough that the function serves the public good or the public interest in some way. Rather, to qualify as a traditional, exclusive public function within the meaning of our state-action precedents, the government must have traditionally and exclusively performed the function. The Court has stressed that “very few” functions fall into that category.... Under the Court’s cases, those functions include, for example, running elections and operating a company town.... The Court has ruled that a variety of functions do not fall into that category, including, for example: running sports associations and leagues, administering insurance payments, operating nursing homes, providing special education, representing indigent criminal defendants, resolving private disputes, and supplying electricity. And, the majority says, running a TV station also does not qualify. The relevant function in this case is operation of public access channels on a cable system. That function has not traditionally and exclusively been performed by government. And that's pretty much the ballgame for those arguing for a public forum designation even for this public access channel created by the state. However, Kavanaugh does go further in highlighting why it would be ludicrous to argue that social media sites, for example, would qualify and be subject to the 1st Amendment. 
As the opinion notes, just hosting a forum for speech does not magically turn you into a government actor hosting a "public forum." And then Kavanaugh goes even further, directly saying that a private entity can moderate all they'd like: By contrast, when a private entity provides a forum for speech, the private entity is not ordinarily constrained by the First Amendment because the private entity is not a state actor. The private entity may thus exercise editorial discretion over the speech and speakers in the forum. This Court so ruled in its 1976 decision in Hudgens v. NLRB. There, the Court held that a shopping center owner is not a state actor subject to First Amendment requirements such as the public forum doctrine.... The Hudgens decision reflects a commonsense principle: Providing some kind of forum for speech is not an activity that only governmental entities have traditionally performed. Therefore, a private entity who provides a forum for speech is not transformed by that fact alone into a state actor. After all, private property owners and private lessees often open their property for speech. Grocery stores put up community bulletin boards. Comedy clubs host open mic nights. As Judge Jacobs persuasively explained, it “is not at all a near-exclusive function of the state to provide the forums for public expression, politics, information, or entertainment.” And just to drive the point home: In short, merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints. If the rule were otherwise, all private property owners and private lessees who open their property for speech would be subject to First Amendment constraints and would lose the ability to exercise what they deem to be appropriate editorial discretion within that open forum. 
Private property owners and private lessees would face the unappetizing choice of allowing all comers or closing the platform altogether. “The Constitution by no means requires such an attenuated doctrine of dedication of private property to public use.” ... Benjamin Franklin did not have to operate his newspaper as “a stagecoach, with seats for everyone.” ... That principle still holds true. As the Court said in Hudgens, to hold that private property owners providing a forum for speech are constrained by the First Amendment would be “to create a court-made law wholly disregarding the constitutional basis on which private ownership of property rests in this country.” ... The Constitution does not disable private property owners and private lessees from exercising editorial discretion over speech and speakers on their property. This is less important to the point we're discussing here, but if you're wondering why the majority said that even when the state creates the public access channel by law, it does not become a public forum, Kavanaugh explains that ruling otherwise would sweep in far too many private entities that happen to hold government licenses: Numerous private entities in America obtain government licenses, government contracts, or government-granted monopolies. If those facts sufficed to transform a private entity into a state actor, a large swath of private entities in America would suddenly be turned into state actors and be subject to a variety of constitutional constraints on their activities. As this Court’s many state-action cases amply demonstrate, that is not the law. And it is noteworthy that the majority opinion makes it clear that some public access channels, if actually operated by the government, could count as a public forum, subject to the 1st Amendment. It just doesn't think MNN meets that criterion. Now, here's the ironic bit. 
Kavanaugh concludes the opinion with the following: It is sometimes said that the bigger the government, the smaller the individual. Consistent with the text of the Constitution, the state-action doctrine enforces a critical boundary between the government and the individual, and thereby protects a robust sphere of individual liberty. Expanding the state-action doctrine beyond its traditional boundaries would expand governmental control while restricting individual liberty and private enterprise. We decline to do so in this case. MNN is a private entity that operates public access channels on a cable system. Operating public access channels on a cable system is not a traditional, exclusive public function. A private entity such as MNN who opens its property for speech by others is not transformed by that fact alone into a state actor. Under the text of the Constitution and our precedents, MNN is not a state actor subject to the First Amendment. Cornell Law professor Michael Dorf found the first sentence of that passage a bit odd, as Kavanaugh doesn't attribute the line, but just says "it is sometimes said." So he went hunting for where that quote originated, and it turns out that it originated with Dennis Prager. Remember Dennis Prager? He was actually one of the first to file a lawsuit making the ridiculous claim that YouTube is a public forum, subject to the First Amendment (after YouTube put just a small percentage of his videos into "restricted mode" and Prager freaked out, claiming "anti-conservative bias" despite the fact that YouTube put a far higher percentage of videos on what most people would consider to be more "liberal" channels into the very same restricted mode). Prager's lawsuit was laughed out of court, but it is still cited all the time by people who claim (1) anti-conservative bias by the platforms, and (2) that platforms are a public forum, and therefore subject to the First Amendment. 
Indeed, this is from Prager's original complaint: Despite their control and regulation of one of the largest forums for public speech and expression in California, the United States, and the world, Google/YouTube regulate and censor speech as if the laws governing free speech and commerce do not apply to it. In so doing, Defendants believe that they have unfettered, unbridled, and unrestricted power to censor speech or discriminate against public speakers at their whim for any reason, including their animus toward and political viewpoints of their public users and providers of video content, because Defendants are for profit organizations rather than governmental entities. Google/YouTube are wrong. As the California Supreme Court has stated: “[t]he idea that private property can constitute a public forum for free speech if it is open to the public in a manner similar to that of public streets and sidewalks” has long been the law in California. Fashion Valley Mall, LLC v. N.L.R.B. (2007) 42 Cal.4th 850, 858. The United States Supreme Court also recognized more than a half-century ago that the right to free speech guaranteed by the First Amendment to the United States Constitution can apply even on privately owned property. One of the most important places to exchange and express views is cyberspace, particularly social media, where users engage in a wide array of protected First Amendment activity on any number of diverse topics. And because the “[i]nternet’s forces and directions are so new, so protean, and so far reaching,” however, the U.S. Supreme Court warned that the law must be conscious that what it says today about the characteristics of a forum or free speech medium may be obsolete tomorrow. See Packingham v. North Carolina, 137 S.Ct. 1730, 1735-38 (2017). 
So, boy, is it ever ironic that in a Supreme Court ruling that completely and utterly debunks Prager's own legal theory, the "conservative" wing of the Supreme Court quotes (without citation) a line from Prager to defend why Prager is laughably wrong. That's delicious. Oh, and just in case the folks arguing that social media is a public forum think the dissenting "liberal" Justices might save them here, that's not going to fly either. From Sotomayor's dissent: In addition, there are purely private spaces, where the First Amendment is (as relevant here) inapplicable. The First Amendment leaves a private store owner (or homeowner), for example, free to remove a customer (or dinner guest) for expressing unwanted views.... In these settings, there is no First Amendment right against viewpoint discrimination. So, uh, yeah. If you're arguing that private platforms like Facebook, YouTube and Twitter are magically "public fora" even as the Supreme Court is rejecting that designation for a public access channel that was literally created by the state, suffice it to say that your argument is not going to go very far.

posted 1 day ago on techdirt
Unlike other eLearning courses that bog you down with dull voiceovers and boring videos, the Excel Data Analyst Certification School features real, hands-on projects to turn you into an Excel master; and you'll even have access to your own personal mentor to guide you along the way! You'll explore data manipulation, analytics and problem-solving, produce data visualizations and business intel reports, and much more. Complete the bootcamp, and you'll emerge with an interview-ready portfolio and a CPD accredited certification to back up your know-how. It's on sale for $49. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

posted 1 day ago on techdirt
So, just over a year ago, the FCC rushed to kill net neutrality at telecom lobbyists' behest. As we noted last week, the repeal did far more than just kill net neutrality protections; it effectively freed uncompetitive telecom providers from most meaningful oversight. With a few notable exceptions, most ISPs have tried to remain on their best behavior for two reasons: one, they're worried about the ongoing lawsuit from 23 state AGs that could potentially restore the rules any day now. And two, they don't want to run afoul of the nearly two dozen states that passed their own net neutrality rules in the wake of the repeal. Of course, this all occurred because of the Ajit Pai FCC's claim that killing the rules would result in amazing broadband growth, competition, and investment. But as people keep digging into the numbers, they've (surprise!) increasingly realized that absolutely none of those promises ever materialized (and aren't likely to without more competition). The latest case in point comes courtesy of longtime journalist Rob Pegoraro, who again noted how that supposed investment boon never happened, and in fact many ISPs are already pulling back on investment thanks to limited competition and tepid regulatory oversight: "Figures USTelecom posted in February, for example, show Verizon cutting its investment by 3.4% from 2017 to 2018. And the 3.9% increase shown for AT&T (T) vanishes if you subtract the $1.2 billion the firm spent in 2018 on the government-backed FirstNet emergency-responder network. And last week, AT&T Communications CEO John Donovan told attendees at an investor conference that the firm would slow its fiber build-out." Funny, that. Pegoraro also spoke to a number of small ISPs Ajit Pai said would be particularly aided by the gutting of popular consumer protections. 
They similarly couldn't actually provide any concrete examples of how the killing of net neutrality aided them, outside of some vague, unsubstantiated claims that it was harder to get a bank loan with the rules in place. Before the repeal, Pai had circulated all manner of massaged data trying to suggest that small ISPs had been harmed by the fairly modest (by international standards) neutrality protections, something the FCC's own data disproved. And while the Pai FCC has been trying in recent weeks to suggest that the net neutrality repeal resulted in a huge boost in investment and broadband speed, that too is based on flimsy, massaged data. For example, the FCC has tried to claim that killing net neutrality resulted in historic fiber deployment, but at least half of last year's fiber growth was actually thanks to fiber build-out conditions affixed by the previous FCC to AT&T's 2015 merger with DirecTV. And much of the data cited by the FCC showing broadband speed and coverage improvement was collected before net neutrality was even formally repealed. Go figure. On the flip side, Pegoraro notes that none of the doomsday scenarios portrayed by net neutrality advocates really occurred either. But he fails to note that's not because ISPs didn't want to. ISPs didn't want to dramatically shift their business models only to have the AG lawsuit restore the FCC's 2015 rules, making them suddenly out of compliance. And they didn't want to violate numerous state net neutrality laws that popped up in the wake of the repeal, several of which (like in Washington State) actually go a bit further than the original FCC rules did. Should the AG lawsuit against the FCC be victorious (a ruling should drop any day now), the FCC's 2015 rules could be restored, likely triggering an appeal and possible Supreme Court challenge.
Should the AG lawsuit fail, it's likely ISPs will start being far less subtle about their efforts to abuse the lack of competition to nickel-and-dime you in creative new ways. But again, those claiming that net neutrality didn't matter because it's been a year and the internet didn't implode are really only advertising that they have no real idea what they're talking about. More importantly, a year's hindsight has made it clear none of the repeal's purported benefits were actually real. And they weren't real because the repeal had only one real purpose: to help entrenched telecom monopolies make more money on the back of a captive customer base. That tends to get lost in the verbose discussion about policy, but it doesn't make it any less true. Permalink | Comments | Email This Story

Read More...
posted 1 day ago on techdirt
You may have seen this story in various forms over the weekend, starting with a big Wall Street Journal article (paywall likely) claiming that Genius caught Google "red handed" in copying lyrics from its site. Lots of other articles on the story use the term "red handed" in the title, and you'll understand why in a moment. However, there's a lot of background to go over here -- and while many Google haters are making a big deal out of this news, after going through the details, it seems like (mostly) a completely over-hyped, ridiculous story. First, a little background: for pretty much the entire existence of this site, we've written about legal disputes concerning lyrics sites -- going all the way back to a story in 2000 about LyricFind (remember that name?) preemptively shutting itself down to try to work out "licensing" deals for the copyright on lyrics. Over the years, publishers have routinely freaked out and demanded money from lyrics sites. As we've pointed out over and over again, it was never clear how this made any sense at all -- especially on crowdsourced lyrics sites. It's not as though lyrics sites are taking away from the sales of the music -- if anything, they're the kind of thing that connects people more deeply to the music and would help improve other aspects of the music business ecosystem. Over time, however, more and more sites realized that it was just easier to pay up than fight it out in court. One of those sites was Genius -- originally "RapGenius" -- which was called out by the National Music Publishers' Association as one of the "worst" infringers out there a few years back. Genius eventually caved in and agreed to license lyrics, despite incredibly strong fair use claims (since the whole point of Genius was to allow for annotation and commentary). However, in this latest case, it's now Genius that's complaining about someone else copying its content. Except... it's not Genius' content.
This is what makes the story bizarre -- which we'll get to in a moment. However, first, it is worth highlighting the somewhat fun way in which Genius apparently "caught" Google using content from the Genius site as its source material. Basically, Genius hid a code in its lyrics based on whether it used "straight" apostrophes or curly "smart" apostrophes: Starting around 2016, Genius said, the company made a subtle change to some of the songs on its website, alternating the lyrics’ apostrophes between straight and curly single-quote marks in exactly the same sequence for every song. When the two types of apostrophes were converted to the dots and dashes used in Morse code, they spelled out the words “Red Handed.” The WSJ shows the following example from a snippet of lyrics from the Alessia Cara song "Not Today." So, this secret bit of encoding is kinda clever (as is the use of Morse code and the message it spells out). But, despite everyone freaking out over this, there's a pretty big question: does this even matter? And while CBS stupidly and incorrectly claims that Genius is suing Google over this, there's no indication of any actual lawsuit yet, and it's not clear what they could actually sue over. First off, despite the headline accusations against Google, Google (1) licenses lyrics itself, and (2) gets them from LyricFind (remember them?) with whom it signed a big deal a few years back. This raises a few different issues. First, as law professor Annemarie Bridy noted, since both Genius and Google license the lyrics, it doesn't really matter where they're sourced from, as regards copyright law: Interesting competition/vertical integration issues here (@HalSinger), but no © issues. If the reproduced lyrics are licensed, the actual source doesn’t matter. What am I missing? https://t.co/kGhQX37HIz — Annemarie Bridy (@AnnemarieBridy) June 16, 2019 Indeed, in many ways, the situation reminds me of an important Supreme Court ruling from 1991 in Feist v. Rural Telephone Service.
In that case, you had a telephone company that inserted fictitious residents and phone numbers into its phone books to catch anyone copying straight from its own directory. And, indeed, it caught Feist ("red handed") copying directly from its phone books, by finding the fictitious entries in Feist's directory. However, as the Supreme Court noted, there was no copyright interest in phone numbers, which were factual information. This was the case that explicitly rejected the "sweat of the brow" theory of copyright, saying that you only get copyright in new creative works. This situation is not identical, because there is clearly a copyright interest in the lyrics, but since both parties (actually, all three parties, if we include LyricFind) have properly licensed the works, we're in the same basic legal framework. Anyone who is allowed to post these lyrics can and should be able to get them from anywhere. The second reason why this story is likely all hype and no substance is the role of LyricFind. Google basically said "hey, we just get stuff from our partners, and don't scrape, so if there's a problem... it's from our partners." From the WSJ: In a written statement, Google said the lyrics on its site, which pop up in little search-result squares called “information panels,” are licensed from partners, not created by Google. “We take data quality and creator rights very seriously and hold our licensing partners accountable to the terms of our agreement,” Google said. After this article was published online Sunday, Google issued a second statement to say it was investigating the issue raised by Genius and would terminate its agreements with partners who were “not upholding good practices.” LyricFind separately denied copying from Genius, but it does seem the more likely culprit.
In its own blog post, LyricFind says that it supplied the lyrics for Google, but denies copying from Genius, and says that the WSJ got a bunch of facts wrong: The lyrics in question were provided to Google by LyricFind, as was confirmed to WSJ prior to publication. Google licenses lyrics content from music publishers (the rightful owner of the lyrics) and from LyricFind. To accuse them of any wrongdoing is extremely misleading. LyricFind invests heavily in a global content team to build its database. That content team will often start their process with a copy of the lyric from numerous sources (including direct from artists, publishers, and songwriters), and then proceed to stream, correct, and synchronize that data. Most content our team starts with requires significant corrections before it goes live in our database. Some time ago, Ben Gross from Genius notified LyricFind that they believed they were seeing Genius lyrics in LyricFind’s database. As a courtesy to Genius, our content team was instructed not to consult Genius as a source. Recently, Genius raised the issue again and provided a few examples. All of those examples were also available on many other lyric sites and services, raising the possibility that our team unknowingly sourced Genius lyrics from another location. As a result, LyricFind offered to remove any lyrics Genius felt had originated from them, even though we did not source them from Genius’ site. Genius declined to respond to that offer. Despite that, our team is currently investigating the content in our database and removing any lyrics that seem to have originated from Genius. The company also pointed out that Genius has only identified approximately 100 songs that were copied, and it has 1.5 million songs in its database, suggesting that it's not in the business of regularly copying from Genius. But, again, it's not clear why it would really matter that much either way, since everyone is licensed.
To put it another way: Genius' license to the lyrics does not grant it any other rights beyond being able to display those lyrics itself. It has no exclusivity. It doesn't hold the copyright. And, no, changing a few apostrophes is unlikely to meet the creative bar to get a new derivative copyright. So, there's no additional right that Genius has to stop another licensee from using its version of the lyrics. There's a separate issue here worth noting as well: all of this demonstrates just how idiotic the whole "licensing of lyrics" business is -- considering that what everyone here is admitting is that even when they license lyrics, they're making it up much of the time. Specifically, what people are noting is that they license lyrics from the publishers, but the publishers themselves rarely even have or know the lyrics they're licensing, so lyrics sites try to figure them out themselves and "create" the lyrics file, which may or may not be accurate. Indeed, the WSJ reports that this is why Genius first became suspicious of Google -- because on one particularly difficult to understand track, it had reached out to the musician directly for the lyrics, and then was surprised to see the same version on Google. But... if the publishers don't even know the lyrics they're licensing, then what the fuck are they licensing in the first place? The right to try to decipher the lyrics that they supposedly hold a copyright on? Really? One music guy suggested that a different "source" of the "problem" (if you do consider it a problem) is that since the publishers have no idea what the lyrics are anyway, THEY might be sourcing the lyrics themselves from Genius and then passing them along to LyricFind. In short, the whole lyrics/copyright space remains a clusterfuck. But it's difficult to see how it's at all Google's fault.
Some people have raised a few other possible legal arguments that have at least somewhat more merit than any copyright claim, but still strike me as incredibly weak. First, there's the argument that this kind of scraping violates Genius' terms of service. Of course, that opens a Pandora's box of how enforceable click-through terms of service actually are (though many courts have found them binding). But then it will matter quite a bit whether it was actually Google, LyricFind, or someone else who copied the content from Genius. And, given everything discussed above, tracking that down seems almost impossible -- and given the low number of "copied" lyrics found, it certainly doesn't appear to be a major automated scraping situation. The other possibility that a few people have suggested is that there are competition/antitrust arguments to be made against Google here. That argument boils down to Google abusing its size and market position to harm a competitor, Genius. And... maybe? There was a similar complaint a few years ago about Google apparently sinking a site that tried to estimate the "net worth" of celebrities, by posting that info in a Google "answer box" and not having people click through to the celebrity site. But that argument also strikes me as incredibly weak for multiple reasons. First, if Google putting the info from your site in an answer box destroys all your traffic and your business, then, um, you didn't have very much of a business in the first place, and it's not clear that your site really adds that much value. At the very least, it suggests that your business is not that defensible. In the case of Genius, the site has long insisted that its real value was in the annotations, not just the lyrics. But now it's complaining that showing just the lyrics (which tons of other sites also have) is somehow removing traffic? That's... weak.
Second, going by consumer benefit alone, Google has a pretty strong argument that displaying this information (and again, with lyrics, it's all properly licensed) is a lot better for users than shunting them off to a third-party site. And, third, as evidenced above, it doesn't appear that Google actually did anything here at all -- other than properly licensing lyrics via LyricFind. It's very difficult to see how there's any antitrust issue with that. In the end, this is an interesting story -- especially in highlighting how Google was "caught" -- but it's hard to see what the actual legal problem is here. There are plenty of reasons to be concerned about Google, but the fact that its properly licensed lyrics match someone else's properly licensed lyrics doesn't seem like one of them. Permalink | Comments | Email This Story
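For the technically curious, the apostrophe watermark described in this post is simple enough to sketch in a few lines of Python. This is an illustration, not Genius' actual implementation, and the WSJ doesn't say which apostrophe style encoded a Morse dot and which a dash, so that mapping (and the sample lyrics) are assumptions:

```python
# Sketch of the apostrophe watermark the WSJ describes: the two apostrophe
# styles double as Morse code symbols. The dot/dash assignment below is an
# assumption -- the article doesn't specify which mark encoded which symbol.

MORSE = {  # subset of the Morse alphabet, enough for this demo
    'A': '.-', 'D': '-..', 'E': '.', 'H': '....', 'N': '-.', 'R': '.-.',
}

STRAIGHT, CURLY = "'", "\u2019"  # ' = dot, ’ = dash (assumed)

def to_morse(message: str) -> str:
    """Concatenate Morse for each letter; spaces/unknown chars are dropped."""
    return ''.join(MORSE[c] for c in message.upper() if c in MORSE)

def watermark(lyrics: str, message: str) -> str:
    """Rewrite the lyrics' apostrophes so they spell `message` in Morse."""
    symbols = iter(to_morse(message))
    out = []
    for ch in lyrics:
        if ch in (STRAIGHT, CURLY):
            # Consume the next Morse symbol; pad with dots once exhausted.
            out.append(STRAIGHT if next(symbols, '.') == '.' else CURLY)
        else:
            out.append(ch)
    return ''.join(out)

def extract(text: str) -> str:
    """Read the apostrophe sequence back out as dots and dashes."""
    return ''.join('.' if ch == STRAIGHT else '-'
                   for ch in text if ch in (STRAIGHT, CURLY))

# "RED" in Morse is .-. / . / -.. -- seven symbols, so the carrier text
# needs at least seven apostrophes.
lyrics = "I won't stay, don't go, it's fine, can't you see, ain't that it, she'll know, they're gone"
marked = watermark(lyrics, "RED")
print(extract(marked))  # -> .-..-..
```

Note that concatenated Morse with no letter separators isn't uniquely decodable in general, but for a watermark that doesn't matter: you only need to check whether the extracted dot/dash sequence matches the one you embedded.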

Read More...
posted 2 days ago on techdirt
The awful Article 13/17 of the EU's Copyright Directive only seems to have passed thanks to some MEPs voting for it by mistake. But the European Parliament was not the only arm of the European Union where there was strong resistance to the awful ideas contained in the upload filter proposal. Some individual governments were also against aspects of the law. For example, right at the end of the legislative process, in April 2019, no less than seven EU nations expressed their serious concerns. One of them was Poland, which issued a joint statement (pdf) with the Netherlands, Luxembourg, Italy and Finland, including the following: We believe that the Directive in its current form is a step back for the Digital Single Market rather than a step forward. Most notably we regret that the Directive does not strike the right balance between the protection of right holders and the interests of EU citizens and companies. It therefore risks to hinder innovation rather than promote it and to have a negative impact the competitiveness of the European Digital Single Market. Furthermore, we feel that the Directive lacks legal clarity, will lead to legal uncertainty for many stakeholders concerned and may encroach upon EU citizens' rights. We therefore cannot express our consent with the proposed text of the Directive. Unfortunately, in the final vote, these countries were outvoted by the other EU Member States, and the Directive was passed. However, it seems that is not the end of the story. On May 23, the official Twitter account of the Chancellery of the Prime Minister of Poland tweeted as follows, re-stating the points made in the joint statement: Tomorrow #Poland brings action against copyright directive to CJEU. Here's why. #Article13 #Article17 #ACTA2 Why is Poland concerned about the Copyright Directive? The directive does not ensure a balance between the protection of right holders and the interests of EU citizens & EU enterprises. 
The directive does not ensure legal clarity, fostering legal uncertainty for stakeholders and endangering the rights of EU citizens. It could have a negative impact on the competitiveness of the European digital single market. There is a risk that it will hinder innovations instead of promoting them. Those criticisms are made even more pointed by the reference to ACTA -- the Anti-Counterfeiting Trade Agreement that Polish citizens played an important part in helping to defeat in 2012. Using the hashtag #ACTA2 is a clear attempt to frame the Copyright Directive as more of the same bad stuff -- with the hope that it will suffer the same fate. And yet despite that tantalizing tweet, the Polish government failed to provide any more details about what exactly its legal challenge against the Copyright Directive at the EU's top court, the Court of Justice of the European Union (CJEU), involved. We do know that the complaint has been submitted, because the action has been assigned an official case number, C-401/19, but with all the fields containing placeholders at the time of writing. Tomasz Targosz, from the Institute of Intellectual Property Law, Jagiellonian University Kraków, has written an interesting post on the Kluwer Copyright Blog about the Polish move. In it, he provides invaluable information about the political context for this unexpected development. He points out that the failure to publish the official complaint may indicate that the argument it employs is weak, and unlikely to stand up to expert scrutiny. But Targosz goes on to make the following important point: No matter how the complaint is argued in terms of the legal quality of reasoning, it may be effective as long as there are no obvious formal errors. The issue at stake will garner so much attention that the arguments the Court will have to consider will go way beyond the initial complaint.
We can expect numerous and voluminous publications, position papers, etc., spelling out all the legally relevant factors (especially as so much has been already said). The complaint can therefore be compared to lighting the fuse. Whether any explosion will result from it is not certain, but sometimes even a tiny spark suffices. That is, it seems likely that now that the formal complaint process has begun, the CJEU will be duty-bound to consider in depth all the issues raised. This will therefore provide a fresh opportunity for people to make the familiar arguments about why the Copyright Directive is so flawed, especially its implicit requirement for upload filters. Moreover, this time it is not fickle and highly partial politicians that will be deciding, but the staid and rather more independent senior judges of the CJEU. As we've seen in the past, they have no hesitation in overturning at a stroke pivotal EU laws that have taken years to draft and pass. Although it's impossible to predict what the CJEU will rule on this matter, it certainly seems that there is still hope that some or all of the Copyright Directive could be thrown out. For those who feared it was all over, never say never. Follow me @glynmoody on Twitter, Diaspora, or Mastodon. Permalink | Comments | Email This Story

Read More...
posted 2 days ago on techdirt
The Ninth Circuit Court of Appeals has just handed down a refresher [PDF] on a few legal issues, most notably what is or isn't "reasonable" when it comes to suspicion. Police officers thought an anonymous tip about a man carrying a gun and someone running away from them created enough suspicion to chase down Daniel Brown, stop him at gunpoint, and search him for contraband. Contraband was found, leading to Brown's motion to suppress. The lower court said this combination -- an anonymous report of a gun and Brown's decision to run when he saw the police cruiser -- was reasonable enough. Not so, says the Ninth Circuit, pointing out the obvious fact that a person carrying a gun can't be inherently suspicious in a state where carrying a gun in public is permitted. In Washington State, it is presumptively lawful to carry a gun. It is true that carrying a concealed pistol without a license is a misdemeanor offense in Washington. See RCW §§ 9.41.050(1)(a) (“[A] person shall not carry a pistol concealed on his or her person without a license to carry a concealed pistol . . . .”), 9.41.810 (explaining that any violation of the subchapter is a misdemeanor “except as otherwise provided”). However, the failure to carry the license is simply a civil infraction. There was no reason for officers to assume Brown's gun was unlicensed. Since carrying a gun in Washington is "presumptively legal," the officers would have needed more info than they had to perform a stop just to ask Brown for his carry license. The anonymous tip officers received said only that a YWCA resident had approached the desk and said they'd seen a man with a gun. No further information was given by the tipster. Faced with the weakness of the tip and the presumptive legality of gun ownership, the police then argued Brown might have been illegally "displaying" his gun to "cause alarm."
But the court denies this argument -- first raised on appeal -- as being no better than assuming Brown's mere gun possession was enough to justify a stop. Faced with this reality, the government now argues that the officers suspected that the manner in which Brown was carrying his gun was unlawful: it is “unlawful for any person to carry, exhibit, display, or draw any firearm . . . in a manner, under circumstances, . . . that warrants alarm for the safety of other persons.” RCW § 9.41.270. Never mind that nothing in the record could support such a finding. No evidence shows that the resident was alarmed at the time she reported seeing the gun. There is no report that she yelled, screamed, ran, was upset, or otherwise acted as though she was distressed. Instead, the 911 call reported only that the resident “walked in” and stated “that guy has a gun.” Finally, the government argued that Brown's decision to flee when he saw police officers was inherently suspicious. Again, the court says this is wrong. While fleeing officers can be suggestive of wrongdoing, it is only one factor and it's one heavily influenced by the deteriorated relationships many law enforcement agencies have with the communities they serve. The Ninth Circuit quotes Supreme Court Justice John Paul Stevens, who put this in his dissent from the Court's 2000 decision in Illinois v. Wardlow: Among some citizens, particularly minorities and those residing in high crime areas, there is also the possibility that the fleeing person is entirely innocent, but, with or without justification, believes that contact with the police can itself be dangerous, apart from any criminal activity associated with the officer’s sudden presence. 
The Appeals Court adds to this, saying not much has improved since Justice Stevens authored his dissent: In the almost twenty years since Justice Stevens wrote his concurrence in Wardlow, the coverage of racial disparities in policing has increased, amplifying awareness of these issues. [...] Although such data cannot replace the “commonsense judgments and inferences about human behavior” underlying the reasonable suspicion analysis, Wardlow, 528 U.S. at 125, it can inform the inferences to be drawn from an individual who decides to step away, run, or flee from police without a clear reason to do otherwise. See id. at 133 (“Moreover, these concerns and fears are known to the police officers themselves, and are validated by law enforcement investigations into their own practices.” (footnote omitted)). Attached to this paragraph is a footnote quoting the DOJ's investigation of the Seattle Police Department -- the one involved in the arrest at the center of this case. The 2011 report found the Seattle PD routinely deployed "unnecessary and excessive force" and engaged in "racially discriminatory policing." The court goes on to say this isn't just a problem with the Seattle PD, but law enforcement in general, which gives plenty of people all the reason they need to dodge interactions with law enforcement. Given that racial dynamics in our society—along with a simple desire not to interact with police—offer an “innocent” explanation of flight, when every other fact posited by the government weighs so weakly in support of reasonable suspicion, we are particularly hesitant to allow flight to carry the day in authorizing a stop. The public isn't obligated to stop just because an officer says, "Stop." In this case, the officers said nothing until Brown was already running. Lots of people have zero interest in talking to the police. Some don't want the hassle. Most don't enjoy the experience. 
And some suspect they'll probably end up arrested or dead, even if they haven't done anything wrong. If law enforcement doesn't like the way this decision breaks, it really can't blame anyone else for the public's reaction to the unexpected presence of officers. Even the tipster said she didn't want to talk to an officer because, according to the YWCA rep speaking to the dispatcher, she "[does not] like the police." Running from cops isn't inherently suspicious. Far too often, running from cops just makes sense. Permalink | Comments | Email This Story

Read More...
posted 2 days ago on techdirt
We've noted for many years that (like so many "internet of things" devices) modern smart televisions have the security protection equivalent of damp cardboard. Not only are they often easily hacked (something intelligence agencies are super excited about since it gives them audio access to targets), but the companies that make them have been busted repeatedly for hoovering up user usage data (and even audio from your living room), and then failing to adequately secure it. This week, Samsung took a bit of heat for urging its TV customers, for the first time, to occasionally run an antivirus scan on their television sets. The tweet was online briefly before Samsung deleted it, apparently realizing it only advertised the fact that you shouldn't be getting viruses on your TV set in the first place: That's amusing for several reasons. One, because customers wouldn't be getting viruses on their television sets if these products had even the most basic security protections, something TV vendors have failed at for years. Two, because it highlights how many modern televisions have become insanely complicated. Not because consumers necessarily want them to be insanely complicated, but because most TV vendors want you using their embedded streaming platforms as opposed to a third-party streaming device (like Roku, Chromecast, or a game console). And of course they want you using their streaming platforms because they want to monetize your viewing and other profitable data. As a Vizio executive recently acknowledged, this can help subsidize the cost of cheaper TV sets. That creates a dilemma whereby the consumer is forced to pay a premium if they want a TV set that simply displays a god-damned image and doesn't hoover up their personal data: This is a real "privacy as a luxury product" dilemma: you can get a great 55-inch TV for $500, but it's full of ads that subsidize the price over time and create stable revenue for the manufacturers.
Would you buy a dumb TV with the same specs for $1500? — nilay patel (@reckless) June 17, 2019 The problem is if you've shopped for a TV lately, it's effectively impossible to find a "dumb" television that simply passes on signal from other devices. As in: they're simply not available at any meaningful scale, even if you were willing to pay a significant premium for them. Many people certainly are; most embedded TV OS platforms are kind of terrible, and users would rather buy a new streaming box (Roku, Chromecast, Apple TV) every few years than be forced to buy an entirely new TV set because the embedded streaming hardware becomes outdated (something TV vendors clearly would benefit from). While some set vendors might argue that dumb televisions don't exist because there's no market demand for them, the fact is they haven't even bothered to try. And they haven't bothered to try because they're fixated on accelerating the TV upgrade cycle and on collecting and selling your personal usage data to a universe of partners. Which again, might not be quite as bad if these companies had done a good job actually securing and encrypting this data, or designing television OSes that didn't feel like they were barfed up from the bowels of 1992 GUI design hell. It's all kind of a silly circle of dysfunction, but pretty standard operating procedure in the internet of broken things era, where an endless list of companies now sell over-hyped internet-connected appliances, gleefully collect and monetize your data, but can't be bothered to adequately secure that data or provide consumers with clear options to avoid data collection entirely. Permalink | Comments | Email This Story

Read More...
posted 2 days ago on techdirt
The process may have taken forever, but Paul "welcome to the big leagues" Hansmeier, the apparent mastermind behind the Prenda copyright trolling scam, has finally been sentenced to 14 years in prison, and told to repay $1.5 million to 704 victims of his scam. We've been covering the actions of Hansmeier and his partner in crime, John Steele, going back many, many years now. None of us have the time to recount all of the many scams they've pulled, but they took copyright trolling to new lows. They tried using Florida's "pure bill of discovery" rules to abuse the system to get names to shake down based on IP addresses. They sent totally unqualified and unprepared "associates" into courts to try to hide their own involvement in cases, they abused the CFAA by pretending movies they uploaded themselves were "hacked" in an attempt to get around restrictions on copyright trolling, and they got someone they threatened to sue to basically take a dive in order to get access to other people to shake down (and then they went after that guy anyway). Oh, and then there was the whole thing about setting up their own fake movie production house, creating their own porn films to upload themselves, and then pretending in court that they were not the owners of the company in question. And we don't even have much time to get into the time Steele tried to forge the signature of his housekeeper to pretend he was the actual officer of one of those fake shell companies. Over and over and over again, Hansmeier and Steele played every possible game with a single focus in mind: getting names of people to send threatening shakedown letters to. And, apparently, they took in about $6 million over the years -- though a bunch of civil cases have forced them to cough up plenty of that before the criminal charges came down. And there is no indication that Hansmeier had any regrets about all of this.
Even after his arrest, he (and his wife) engaged in an analogous scheme of ADA trolling, looking for small businesses who might technically violate the ADA, and demanding cash from them to avoid a lawsuit. Hansmeier is facing an investigation over that as well. Oh, and then there was the whole bankruptcy fraud thing. Seriously, the list goes on and on and on, and every time you think you remember it all, you're reminded of some other really sketchy thing Hansmeier and Steele did. So it should probably come as little surprise that the judge in the case was not impressed, and even said he considered giving him even more time in jail: "It is almost incalculable how much your abuse of trust has harmed the administration of justice," [Judge Joan] Ericksen said during the sentencing hearing in U.S. District Court in Minneapolis. [....] Ericksen said she considered going beyond sentencing guidelines but decided instead to impose the maximum of that range, followed by two years of supervised release. She ordered Hansmeier to pay restitution, calling it a conservative toll for his crimes. While the amount of money is significant, she said, "that's not even a major part of the harm" he'd done with his scheme. "The major harm here is what happens when a lawyer acts as a wrecking ball," Ericksen said. She did actually sentence him to longer than the DOJ requested, but only said he has to pay back $1.5 million, as that's the amount they apparently received after they started posting torrent files themselves in order to track down people to shake down. John Steele's sentencing is still to come, though, unlike Hansmeier, Steele actually started cooperating much earlier, meaning it's likely that he'll get a somewhat shorter sentence.
As I said years ago, Steele and Hansmeier remind me of some people I've met over the years who basically seem to think that they can talk their way out of anything, and thus lie and scam with impunity, and when caught, just keep thinking they can talk their way out of that as well. In this case, it finally caught up to them.

posted 3 days ago on techdirt
This one combines a few stories that we've covered a lot over the years, showing how they're intersecting. For some time now we've been covering the US's evidence-free attacks on Huawei, the Chinese telco equipment giant. Basically, for years, there have been stories insisting that Huawei is too closely linked to the Chinese government, leading to fearmongering stories saying that the company should be effectively barred from the US. However, multiple attempts to find security flaws in Huawei's products have failed to show any kind of backdoors, and the fact that US-based Huawei competitors often seem to be making the loudest noises about the Chinese giant should raise some eyebrows. The other story we've covered a lot is around China and patents. For years and years, US companies (and policymakers) would go on and on about how Chinese companies didn't respect US patents, demanding that China "must respect our IP." As we've highlighted for years, the Chinese government realized a decade or so ago that, since the US kept applying diplomatic pressure to "respect patents," it could just start using patents as an economic weapon. The number of patents granted in China started to shoot up, and (surprise surprise) suddenly, in legal disputes, Chinese companies were using patents to block American competitors. And the US couldn't really complain, since it was the US that demanded China "respect patents" so much. Just a few weeks ago, we noted that China was gearing up to respond to Donald Trump's ignorant trade war by using patents against US companies. Put it all together, and it should be no surprise at all that Huawei is now demanding $1 billion from Verizon for patent infringement. Verizon is reportedly using equipment from other companies that relies on Huawei patents covering core networking gear, internet of things technology and wireline infrastructure. 
Verizon and Huawei representatives met last week in New York to talk about whether the gear could infringe on Huawei patents, Reuters said. "These issues are larger than just Verizon," a Verizon spokesman told Reuters. "Given the broader geopolitical context, any issue involving Huawei has implications for our entire industry and also raises national and international concerns." The US government walked right into this. For years it's been demonizing Huawei without evidence, while at the same time demanding that China respect patents. So, of course, it opened itself right up to Huawei now claiming patent infringement against US companies. Even better, it's over third party gear. US policymakers can't seem to think more than a single move ahead, because it was fairly obvious how all of this would play out years ago, and yet they walked right into it.

posted 3 days ago on techdirt
The war on fan-made subtitles waged by the entertainment industry has been going on for a long, long time. While fansubs could, and probably should, be viewed as a potential boon to the entertainment industry, allowing those in far-flung lands to suddenly enjoy its products, fansubs have instead been painted as an aid to pirated content overseas or, in some cases, as copyright infringement themselves, given that they essentially copy parts of the content's scripts. If nothing else is clear as a result of this introduction, it should be that major industry players absolutely hate fansubs. ... Except when they can make use of them, apparently, as Comcast-owned broadcaster Sky has been found using fansubs in its Swiss streaming service in the dumbest way possible. Subscribers of the local Sky platform who watch the last episode of the hit series Chernobyl, with English subtitles enabled, see the following message appearing around the five-minute mark. "- Synced and corrected by VitoSilans – www.Addic7ed.com." Addic7ed.com is a fansub site. Asked for comment, reps for the site said they didn't care at all that Sky was using their work. Instead, they claim to have started the site to get content out to more people, so that language wasn't a barrier to enjoyment. Still, it must have been at least slightly jarring to see Sky essentially forget to strip out Addic7ed's signature from work it presented as its own. Copying someone else's work and passing it off as your own is about as close to the textbook definition of copyright infringement as one could get. Sky, meanwhile, ain't talking. Sky Switzerland hasn't responded to our request for comment at the time of publication. Whether the Addic7ed credit was left in intentionally is highly doubtful though. It seems more likely that someone forgot to remove it. In any case, the mention hasn't gone unnoticed either. At least one person has alerted Sky via Twitter, but the company didn't respond there either. 
It's the hypocrisy here that's worth highlighting: the industry regularly rails against fansub sites, and yet here is a member of that industry using their work in its own product.

posted 3 days ago on techdirt
Listen to your favorite music or podcasts for longer, with fewer distractions, thanks to these True Wireless Bluetooth Fitness Headphones. These are some of the smallest and most lightweight earphones on the market. They're ergonomically designed to sit comfortably in and around your ear so you can jog, hit the gym, or do your daily commute without having to worry about them falling out. They're on sale for $45. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

posted 3 days ago on techdirt
Alexis Madrigal, over at The Atlantic, has a mostly interesting piece recounting the history of how the big internet companies started calling themselves platforms. The history is actually pretty fascinating: There was a time when there were no “platforms” as we now know them. That time was, oh, about 2007. For decades, computing (video games included) had had this term “platform.” As the 2000s began, Tim O’Reilly and John Battelle proposed “the web as a platform,” primarily focusing on the ability of different services to connect to one another. The venture capitalist Marc Andreessen, then the CEO of the also-ran social network Ning, blasted anyone who wanted to extend the definition. “A ‘platform’ is a system that can be programmed and therefore customized by outside developers,” he wrote. “The key term in the definition of platform is ‘programmed.’ If you can program it, then it’s a platform. If you can’t, then it’s not.” My colleague Ian Bogost, who co-created an MIT book series called Platform Studies, agreed, as did most people in the technical community. Platforms were about being able to run code in someone else’s system. This was Facebook’s original definition of its product, Facebook Platform, which allowed outside developers to build widgets and games, and extend the core service. In the years before 2016, nearly all of Mark Zuckerberg’s public references to Facebook as a platform were technical, about connecting with developers. Amusingly, this actually reminded me of articles I had written over a decade ago, talking up why Google and Facebook needed to become a new kind of internet platform -- which I meant in the same manner as Madrigal describes above, and which most people talking about "platforms" meant in the mid-aughts. It meant a system on which others could develop new applications and services. 
I have to admit that I don't know quite how and when the world switched to calling general internet services "platforms" instead, and I'm just as guilty of doing so as others. I have two quick thoughts on why this may have happened before I get back to Madrigal's piece. First, many of the discussions around these big internet companies didn't really have a good descriptive term. When talking about the law, things like Section 230 of the Communications Decency Act refer to them as "interactive computer services," which is awkward. And the DMCA refers to them as "service providers," which is quite confusing, because "internet service provider" has an existing (and somewhat different) meaning: the company that provides you with internet access. Ideally, those companies should be called "internet access providers" (IAPs) rather than ISPs, but what's done is done. And then, of course, there's the equally awkward term "intermediary," which just confuses the hell out of most non-lawyers (and some lawyers). So "platform" came out in the wash as the most useful, least awkward option. And if Madrigal's piece had just stuck with that interesting historical shift, and maybe dug into things like I did in the previous paragraph, it might be really compelling. Unfortunately, Madrigal goes a step or two further -- one that goes right up to the line (though it doesn't totally cross it) of suggesting that there's some legal significance to calling oneself a platform. This is something we've seen too many reporters do of late, spreading a false impression that internet "platforms" somehow get magic protections that internet "publishers" don't get. As we've explained, there is literally no distinction here. Usually people are making this argument with regard to CDA 230's protections, but as we've discussed in great detail, that law makes no distinction between a "platform" and a "publisher." 
Instead, it applies to all "interactive computer services," including any publisher, so long as they host third-party content. Madrigal's piece doesn't call out CDA 230 the way others have, but, unfortunately, it absolutely can be read in a misleading way to suggest that there is some magical legal distinction here that matters. Specifically this part: This new rhetorical device wasn’t just for press releases, but also for ginning up business and creating a legal architecture. Uh, what "legal architecture"? Again, CDA 230, the key law in this area, makes no special distinction for "platforms." There was no need for a "rhetorical device" to consider yourself protected (and there still isn't). Nothing in calling oneself a platform set up any legal architecture, no matter how many ignorant people on Twitter claim it is so. Unfortunately, someone who has already heard that false claim is likely to read Madrigal's piece as confirmation of that incorrect bit of info. So, let's be clear once again and state that there is no special legal distinction for "platforms," and it makes no difference in the world whether an internet company refers to itself as a platform or a publisher (or, for that matter, an instigator, an enabler, a middleman, a gatekeeper, a forum, or anything else). All that matters is whether they meet the legal definition of an interactive computer service (which, if they're online, the answer is generally "yes"), and (to be protected under CDA 230) whether there's a legal question about holding them liable for third-party content. Some people may want the law changed. And they may think that "internet platforms" should be subject to specific rules and regulations -- including silly, unenforceable ideas like "being neutral" -- but that's got nothing to do with the law today, and any suggestion that it does is simply incorrect.
