posted 12 days ago on techdirt
Although it can be confusing and overwhelming, it's absolutely essential that you have at least a basic knowledge of finance. Whether you're pursuing a career in the finance industry or you just need a solid refresher on important concepts, the eduCBA Finance and Investments Bundle can help you out. With access to 700+ courses, you'll develop an understanding of investment banking, financial modeling, project finance, private equity, accounting, and more. This bundle is on sale for $29. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team. Permalink | Comments | Email This Story

Read More...
posted 12 days ago on techdirt
In recent months, both Deputy Attorney General Rod Rosenstein and FBI Director Christopher Wray have been calling for holes in encryption law enforcement can drive a warrant through. Both have no idea how this can be accomplished, but both are reasonably sure tech companies can figure it out for them. And if some sort of key escrow makes encryption less secure than it is now, so be it. Whatever minimal gains in access law enforcement obtains will apparently offset the damage done by key leaks or criminal exploitation of a deliberately-weakened system. Cryptography expert Riana Pfefferkorn has released a white paper [PDF] examining the feasibility of the vague requests made by Rosenstein and Wray. Their preferred term is "responsible encryption" -- a term that allows them to step around landmines like "encryption backdoors" or "we're making encryption worse for everyone!" Her paper shows "responsible encryption" is anything but. And, even if implemented, it will result in far less access (and far more nefarious exploitation) than Rosenstein and Wray think. The first thing the paper does is try to pin down exactly what it is these two officials want -- easier said than done because neither official has the technical chops to concisely describe their preferred solutions. Nor do they have any technical experts on board to help guide them to their envisioned solution. (The latter is easily explained by the fact that no expert on cryptography has ever promoted the idea that encryption can remain secure after drilling holes in it at the request of law enforcement.) If you're going to respond to a terrible idea like "responsible encryption," you have to start somewhere. Pfefferkorn starts with an attempt to wrangle vague law enforcement official statements into a usable framework for a reality-based argument. Rosenstein’s remarks focused more on data at rest than data in transit. 
For devices, he has not said whether his preferred legislation would cover a range of devices (such as laptop and desktop computers or Internet of Things-enabled appliances), or only smartphones, as in some recent state-level bills. His speeches also leave open whether his preferred legislation would include an exceptional-access mandate for data in transit. As some commentators have pointed out, his proposal is most coherent if read to be limited in scope to mobile device encryption and to exclude data in transit. This paper therefore makes the same assumption. Wray, meanwhile, discussed both encrypted messaging and encrypted devices in his January 2018 speech. He mentioned “design[ing] devices that both provide data security and permit lawful access” and asked for “the ability to access the device once we’ve obtained a warrant.” Like Rosenstein, he did not specify whether his “responsible solution” would go beyond mobile devices. As to data in transit, he used a financial-sector messaging platform as a real-world example of what a “responsible solution” might look like. Similarly, though, he did not specify whether his “solution” would be restricted to only certain categories of data—for example, communications exchanged through messaging apps (e.g., iMessage, Signal, WhatsApp) but not web traffic (i.e., HTTPS). This paper assumes that Wray’s “solution” would, like Rosenstein’s, encompass encryption of mobile devices, and that it would also cover messaging apps, but not other forms of data in transit. Either way, there's no one-size-fits-all approach. This is somewhat ironic given these officials' resistance to using other methods, like cellphone-cracking tools or approaching third parties for data and communications. According to the FBI (in particular), these solutions "don't scale." 
Well, neither of the approaches suggested by Rosenstein and Wray scales either, although Rosenstein limiting his arguments to data at rest on devices does suggest a somewhat more scalable approach. The only concrete example given of how key escrow might work to access end-to-end encrypted communications is noted above: a messaging platform used for bank communications. An agreement reached with the New York state government altered the operation of the banking industry's "Symphony" messaging platform. Banks now hold encrypted communications for seven years but generate duplicate decryption keys, which are held by independent parties (neither the banks nor the government). But this analogy doesn't apply as well as FBI Director Christopher Wray thinks it does. That agreement was with the banks about changing their use of the platform, not with the developer about changing its design of the platform, which makes it a somewhat inapt example for illustrating how developers should behave “responsibly” when it comes to encryption. Applied directly, it would be akin to asking cellphone owners to store a copy of a decryption key with an independent party in case law enforcement needed access to the contents of their phone. If several communication platform providers are involved, several duplicate keys must be generated. What this analogy does not suggest is what Wray and Rosenstein suggest: the duplication or development of decryption keys by manufacturers solely for the purpose of government access. These officials think this solution scales. And it does. But scaling increases the possibility of the keys falling into the wrong hands, not to mention the increased abuse of law enforcement request portals by criminals to gain access to locked devices and accounts. As Pfefferkorn notes, these are problems Wray and Rosenstein have never addressed. Worse, they've never even admitted these problems exist.
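The request-portal problem is easy to sketch. Assume a hypothetical escrow design (not one proposed in the paper): the escrowed key sits in tamper-resistant hardware, a vetting team signs off on each law enforcement demand, and the hardware will apply the key to any demand whose signature checks out. Every name and parameter below is illustrative.

```python
import hashlib
import hmac

# Secret held by the staff who vet exceptional-access demands.
# Illustrative placeholder; in a real deployment this credential, and the
# people holding it, become a prime target.
APPROVAL_KEY = b"vetting-team-secret"

def sign_demand(device_id: str, warrant_id: str) -> bytes:
    """Authenticate an approved exceptional-access demand."""
    msg = f"{device_id}:{warrant_id}".encode()
    return hmac.new(APPROVAL_KEY, msg, hashlib.sha256).digest()

def hsm_unlock(device_id: str, warrant_id: str, tag: bytes) -> str:
    # The escrowed key never leaves the hardware, but the hardware will use
    # it on behalf of any request that authenticates. Phish the vetting team,
    # forge a warrant, or steal APPROVAL_KEY, and the tamper-resistant box
    # decrypts for the attacker anyway.
    if not hmac.compare_digest(tag, sign_demand(device_id, warrant_id)):
        raise PermissionError("demand rejected")
    return f"decrypted contents of {device_id}"  # stand-in for the key operation
```

Note that the attacker never touches the hardware itself: a legitimate-looking signature is as good as the key, which is exactly the supply-side weakness the paper describes.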
What a quasi-escrow system would do is exponentially increase attack vectors for criminals and state-sponsored hacking. Implementing Rosenstein's suggestion would provide ample opportunities for misuse. Rosenstein suggests that manufacturers could manage the exceptional-access decryption key the same way they manage the key used to sign software updates. However, that analogy does not hold up. The software update key is used relatively infrequently, by a small number of trusted individuals. Law enforcement’s unlocking demands would be far more frequent. The FBI alone supposedly has been unable to unlock around 7,800 encrypted devices in the space of the last fiscal year. State and local law enforcement agencies, plus those in other countries, up the tally further. There are thousands of local police departments in the United States, the largest of which already amass hundreds of locked smartphones in a year. Wray's suggestion isn't any better. In fact, it's worse. His proposal (what there is of it) suggests it won't just be phone manufacturers providing key escrow but also any developer offering end-to-end encrypted communications. This vastly increases the number of key sources. In both cases, developers and manufacturers would need to take on more staff to handle law enforcement requests. This increases the number of people with access to keys, increasing the chances they'll be leaked, misused, or even sold. The large number of law enforcement requests headed to key holders poses more problems. Bogus requests are going to start making their way into the request stream, potentially handing access to criminals or other bad actors. While this can be mitigated with hardware storage, the attack vectors remain open. [A]n attacker could still subvert the controls around the key in order to submit encrypted data to the HSM [hardware security module] for decryption. 
This is tantamount to having possession of the key itself, without any need to attack the tamper-resistant HSM directly. One way for an attacker to get an HSM to apply the key to its encrypted data input is to make the attacker’s request appear legitimate by subverting the authentication process for exceptional-access demands. These are just the problems a key escrow system would produce on the supply side. The demand for robust encryption won't go away. Criminals and non-criminals alike will seek out truly secure platforms and products, taking their business to vendors out of the US government's reach. At best, forced escrow will be a short-term solution with a whole bunch of collateral damage attached. Domestic businesses will lose sales, and other businesses will be harmed as deliberately-introduced holes in encryption allow attackers to exfiltrate intellectual property and trade secrets, conduct industrial espionage, and engage in identity theft. Wray and Rosenstein tout "responsible encryption." But their arguments are completely irresponsible. Neither has fully acknowledged how much collateral damage would result from their demands. They've both suggested the damage is acceptable even if there is only a minimal gain in law enforcement access. And they've both made it clear every negative consequence will be borne by device and service providers -- from the additional costs of compliance to the sales lost to competitors still offering uncompromised encryption. There's nothing "responsible" about their actions or their public statements, but they both believe they're 100% on the right side of the argument. They aren't, and they've made it clear the wants and needs of US citizens will always be secondary to the wants and needs of law enforcement.
2018 02 05 Technical Response to Rosenstein Wray FINAL (PDF) 2018 02 05 Technical Response to Rosenstein Wray FINAL (Text)

posted 12 days ago on techdirt
Given Verizon's long-standing animosity to net neutrality (and openness and healthy competition in general), the company's acquisition of Tumblr created some understandable tension. Tumblr has been on the front lines of net neutrality support since around 2014, with CEO David Karp stating in 2015 that the service wouldn't exist without net neutrality: "(Undermining net neutrality) would congeal the Internet into something stagnant, something where new players wouldn’t be able to join the game without having the funds to do so. I’m proud to have been able to turn a little side project into an engine of creativity for so many people. I don’t want to be among the last people able to do that." Karp resigned from the company last year, and numerous reports have indicated that while net neutrality advocacy remains strong among employees, the company itself has unsurprisingly lowered the volume of its support for net neutrality under new ownership by Verizon. That has resulted in a slow but steady departure of employees not thrilled to be under the "leadership" of one of the most anti-competitive (and occasionally comically delusional) companies on the tech policy front (former in-house counsel Ari Shahdadi being of particular note). Despite Verizon's ownership, the company's net neutrality advocacy doesn't appear to be dead just yet. This week, the company joined net neutrality advocates' "Operation: OneMoreVote" campaign. As we've noted, activists are trying to use the Congressional Review Act to reverse the FCC net neutrality repeal. Under the CRA, Congress can reverse a regulatory decision within 60 days of it hitting the Federal Register with a majority vote. The GOP and Trump administration used this exact trick to kill consumer broadband privacy protections early last year.
According to net neutrality advocacy group Fight for the Future, Tumblr will join Etsy, Reddit, Vimeo, Medium and other smaller companies in a February 27 effort to pressure lawmakers to support the effort in the Senate: "50 Senators have already come out in support of the CRA, which would completely overturn the FCC’s December 14 decision and restore net neutrality protections. Several Senators have indicated that they are considering becoming the 51st vote we need to win, but they’re under huge pressure from telecom lobbyists. Only a massive burst of energy from the Internet will get them to move." As noted previously, even if this effort passes the Senate it has an uphill climb in the House, where politicians loyal to AT&T, Verizon and Comcast are in even greater supply. And even if the plan nabs the 218 House votes needed, it would still need to be signed by President Trump. And while activists believe Trump might bow to public pressure as part of his purported dedication to his special brand of "populism," that remains a bit of a pipe dream. That's not to suggest the effort is useless; it could go a long way toward forcing politicians to clearly document their disdain for the will of the public ahead of the looming midterms. All of that said, it's good to see the remaining folks at Tumblr still fighting the good fight, despite the fact that they're now owned by a company with a historically miserable track record on consumer privacy, states' rights, competition, honesty, transparency and the quest for a relatively healthy and open internet.

posted 12 days ago on techdirt
Digital cameras can store a wealth of personal information and yet they're treated as unworthy of extra protection -- both by courts and the camera makers themselves. The encryption that comes baked in on cellphones hasn't even been offered as an option on cameras, despite camera owners being just as interested in protecting their private data as cellphone users are. The Freedom of the Press Foundation sent a letter to major camera manufacturers in December 2016, letting them know filmmakers and journalists would appreciate a little assistance keeping their data out of governments' hands. Documentary filmmakers and photojournalists work in some of the most dangerous parts of the world, often risking their lives to get footage of newsworthy events to the public. They face a variety of threats from border security guards, local police, intelligence agents, terrorists, and criminals when attempting to safely return their footage so that it can be edited and published. These threats are particularly heightened any time a bad actor can seize or steal their camera, and they are left unprotected by the lack of security features that would shield their footage from prying eyes. The magnitude of this problem is hard to overstate: Filmmakers and photojournalists have their cameras and footage seized at a rate that is literally too high to count. The Committee to Protect Journalists, a leading organization that documents many such incidents, told us: "Confiscating the cameras of photojournalists is a blatant attempt to silence and intimidate them, yet such attacks are so common that we could not realistically track all these incidents. The unfortunate truth is that photojournalists are regularly targeted and threatened as they seek to document and bear witness, but there is little they can do to protect their equipment and their photos." 
(emphasis added) Cameras aren't that much different than phones, even if they lack direct connections to users' social media accounts or contact lists. We've covered many cases where police officers have seized phones/cameras and deleted footage captured by bystanders. The problem is the Supreme Court's Riley decision only protects cellphones from warrantless searches. (And only in the United States.) While one state supreme court has extended the warrant requirement to digital cameras, this only affects residents of Massachusetts. Everywhere else, cameras are just "pockets" or "containers" law enforcement can dig through without worrying too much about the Fourth Amendment. Unfortunately, it doesn't look like camera manufacturers are considering offering encryption. The issue still doesn't even appear to be on their radar, more than a year after the Freedom of the Press Foundation's letter -- signed by 150 photographers and filmmakers -- indicated plenty of customers wanted better protection for their cameras. Zack Whittaker of ZDNet asked several manufacturers about their encryption plans and received noncommittal shrugs in response. An Olympus spokesperson said the company will "in the next year... continue to review the request to implement encryption technology in our photographic and video products and will develop a plan for implementation where applicable in consideration to the Olympus product roadmap and the market requirements." When reached, Canon said it was "not at liberty to comment on future products and/or innovation." Sony also said it "isn't discussing product roadmaps relative to camera encryption." A Nikon spokesperson said the company is "constantly listening to the needs of an evolving market and considering photographer feedback, and we will continue to evaluate product features to best suit the needs of our users." And Fuji did not respond to several requests for comment by phone and email prior to publication. 
The message appears to be that camera owners are on their own when it comes to keeping their photos and footage out of the hands of government agents. This is unfortunate considering how many journalists and documentarians do their work in countries with fewer civil liberties protections than the US. Even in the US, those civil liberties can be waved away if photographers wander too close to US borders. If a government can search something, it will. Encryption may not thwart all searches, but it will at least impede the most questionable ones.
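And nothing about camera hardware makes this especially hard: authenticated encryption of files at rest is a solved problem on far weaker processors than a modern camera's. As a toy illustration only (this hash-based encrypt-then-MAC construction is for exposition; a real firmware implementation should use a vetted primitive like AES-GCM from an audited library), protecting a footage file looks roughly like:

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a keystream by hashing key || nonce || counter blocks.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Fresh nonce per file, then encrypt-then-MAC so tampering is detectable.
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed: wrong key or modified file")
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))
```

Without the key (say, a passphrase entered on the camera), a seized card yields only ciphertext; the point is that the obstacle is manufacturer priorities, not feasibility.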

posted 12 days ago on techdirt
It's been a minute since we've had to cover some trademark nonsense in the beer industry. In fact, several recent stories have actually represented what might be mistaken for a clapback on aggressive trademark protectionism in the alcohol space. But, like all great things, it just couldn't last. The specific tomfoolery that has brought reality crashing down on us once again comes out of Iowa, where Confluence Brewing has filed a trademark suit against Confluence On 3rd, which is an apartment complex that does not serve or make beer. Confluence Brewing Company on Friday filed a trademark lawsuit and motion for an injunction in Polk County District Court seeking to stop Confluence on 3rd apartments from using the name "Confluence." John Martin, president and co-founder of Confluence Brewing, said representatives of the company have tried to have discussions with Roers Companies, the Long Lake, Minnesota-based developer of Confluence on 3rd and several other Des Moines-area properties, but felt that their complaints were "falling on deaf ears." Those complaints appear to have centered around both companies using the word "confluence" and the potential public confusion that could cause. Which is really dumb. Because the brewery sells beer and the apartment complex rents apartments. A greater deviation in marketplaces I dare say could not be dreamed. And, yet, Confluence Brewing appears to have taken its opponent's refusal to negotiate on these invalid complaints as some sort of personal affront. After some back and forth about whether Confluence On 3rd might add the word "apartments" to the brand, it seems communication ceased. Jeff Koch, a principal at the parent company for Confluence On 3rd, had been a part of these conversations, but communication with him too was rebuffed. Which isn't to say that Koch won't explain to the media just how ridiculous this all is. 
The two companies have distinct names and operate in different business sectors, Koch said in his email to the Register. He said Confluence on 3rd has not experienced any confusion in the marketplace. "Confluence on 3rd was named solely on the historic relevance the city was founded at the confluence of the Des Moines and Raccoon rivers," Koch said in his email to the Register. "It is unique to Des Moines history and should be celebrated, not solely owned and dictated by one brewing company." This is essentially the localization of the aspect of trademark law that prevents a single company from locking up language globally. The whole point of trademark law is to prevent customer confusion within a given market, so that one brewer can't pass themselves off as another by having similar names and branding. That just isn't a concern here, given the disparity in the markets in which these two companies play. So, what got us to the point of having Confluence Brewing alleging true concern about public confusion? Beer coasters, largely. In April 2017, court documents show, Confluence Brewing called Roers Companies asking them to cease and desist their use of blue drink coasters promoting Confluence on 3rd at Des Moines bars. "I just think the bar coasters just seem a little bit blatant," Kerndt said. "I mean, they were being distributed at establishments that serve my clients’ beer." Emails between Kerndt and Koch show Confluence on 3rd had distributed all their coasters by the time of the April call and have not ordered any additional coasters since then. Which is entirely beside the point. Just because a company puts out the tchotchke of its choice doesn't suddenly put it in a competitive situation with anyone who makes those tchotchkes. If that were the case, the tchotchke market as a whole wouldn't... you know... exist.
The only other type of confusion mentioned in the article for Confluence Brewing is that apparently people's Google map skills occasionally send them to the wrong Confluence company for the wrong item. Still, that isn't the type of confusion trademark law is supposed to prevent, and it's easily remedied by directing the customer to another address. I will say that Confluence Brewing comes off as very earnest on the matter, so perhaps the folks there simply aren't aware of the intricacies of trademark law. Its legal team, on the other hand, certainly should be.

posted 13 days ago on techdirt
With the event at Santa Clara earlier this month, and the companion essays published here, we've been talking a lot lately about how platforms moderate content. It can be a challenging task for a platform to figure out how to balance dealing with the sometimes troubling content it can find itself intermediating on the one hand and free speech concerns on the other. But at least, thanks to Section 230, platforms have been free to do the best they could to manage these competing interests. However you may feel about the decisions platforms make now, those decisions would not come out any better without that statutory protection; stripped of it, platforms would face legal consequences unless they opted to remove absolutely everything that could invite trouble. If they had to contend with the specter of liability in making these decisions it would inevitably cause platforms to play a much more censoring role at the expense of legitimate user speech. Fearing such a result is why the Copia Institute filed an amicus brief at the Ninth Circuit last year in Fields v. Twitter, one of the many "how dare you let terrorists use the Internet" cases that keep getting filed against Internet platforms. While it's problematic that they keep getting filed, they have fortunately not tended to get very far. I say "fortunately," because although it is terrible what has happened to the victims of these attacks, if platforms could be liable for what terrorists do it would end up chilling platforms' ability to intermediate any non-terrorist speech. Thus we, along with the EFF and the Internet Association (representing many of the bigger Internet platforms), had all filed briefs urging the Ninth Circuit to find, as the lower courts have tended to, that Section 230 insulates platforms from these types of lawsuits. A few weeks ago the Ninth Circuit issued its decision. The good news is that this decision affirms that the end has been reached in this particular case and hopefully will deter future ones.
However, the court did not base its reasoning on the existence of Section 230. This was somewhat disappointing, because we saw this case as an important opportunity to buttress Section 230's critical statutory protection. But by not speaking to Section 230 at all, the court also didn't undermine it, and the way it ruled isn't actually bad. By focusing instead on the language of the Anti-Terrorism Act itself (this is the statute barring the material support of terrorists), it was still able to lessen the specter of legal liability that would otherwise chill platforms and force them to censor more speech. In fact, it may even be better that the court ruled this way. The result is not fundamentally different than what a decision based on Section 230 would have led to: like with the ATA, which the court found would have required some direct furtherance by the platform of the terrorist act, so would Section 230 have required the platform's direct interaction with the creation of user content furthering the act in order for the platform to potentially be liable for its consequences. But the more work Section 230 does to protect platforms legally, the more annoyed people seem to get at it politically. So by not being relevant to the adjudication of these sorts of tragic cases it won't throw more fuel on the political fire seeking to undermine the important speech-protective work Section 230 does, and then it hopefully will remain safely on the books for the next time we need it. [Side note: the Ninth Circuit originally issued the decision on January 31, but then on February 2 released an updated version correcting a minor typographical error. The version linked here is the latest and greatest.]

posted 13 days ago on techdirt
We should all know by now that Facebook's reliability in handling copyright takedown requests is... not great. Like far too many internet platforms these days, the site typically puts its thumbs heavily on the scales such that the everyday user gets far less preference than large purported rights holders. I say "purported" because, of course, many bogus takedown requests get issued all the time. It's one of the reasons that relying on these platforms, when they have shown no willingness to have any sort of spine on copyright matters, is such a mistake. But few cases are as egregious as that of Leo Saldanha, a well-known environmental activist in India. When I tell you that Saldanha had a Facebook post taken down over a copyright notice, you must certainly be thinking that it had something to do with environmental activism. Nope! Actually, Saldanha wrote an all-text mini-review of an Indian film, Padmaavat, which was taken down after the distributor for the film claimed the post infringed on its copyrights. Here is the entirety of his post that was taken down. “In my view, #padmaavat is a bore fest. Halfway the movie was coming to an end, I felt. But then woke up to the cruel fact there still was the other half, and it involved the horribly cruel act of mass suicide. There is something horribly wrong about a film, when a man’s voice reasserts, that this gory act was to protect ‘Bharat’s swabhimaan, or something to that effect.” “The whole movie has one plot: of owning a woman. And all the characters conspire to subordinate women. True, this is a mythological account of times far in the past. But that one statement after the movie emphasises horrendous social mores of a medieval time and contextualises it as relevant to our times. Movies like these aren’t made with innocent intentions. Ranveer Singh is an incredible actor!” Seriously, that text is the entire post.
And I have to say that it's quite tame as far as movie reviews go, not to mention fairly relevant from a movie critique standpoint. This wasn't someone dumping on the movie for fun. Saldanha had a well-thought-out point, whether or not anyone agrees with the content of his argument. Certainly nothing in that is copyright infringement by any measure. Yet Viacom 18 issued the takedown request and Facebook complied. Not only did it comply, in fact, but when Saldanha pushed back on Facebook trying to figure out what the hell was going on here, the only reply from the site was to warn of a perma-ban for repeated infringement and a recommendation to get Viacom 18's permission to post his review. Saldanha, to put it lightly, was not pleased with this response. Speaking to TNM, Saldanha says that he is deeply offended by the messages he received from Facebook and the allegation that he had violated anyone’s rights on any social media platform. “Anyone should be free to express in any form, their views about public matters. This includes the right to agree, disagree and the right to dissent. I also maintain that I have never used threatening language while offering my opinion on any issue that is public, or of any public person. The fact that Facebook pulled down my post is a serious issue. This only shows that Facebook leans towards those with financial muscle. Viacom18 clearly does not want critical views for the movie,” Saldanha says. There are all sorts of ways this could have happened -- but none of them make either Viacom 18 or Facebook look good. The most immediate theory would be Viacom 18 abusing copyright law to take down a negative review -- and Facebook assisting without a good reason. A more charitable (though still terrible) explanation would chalk it up to (once again) horrible automated systems flagging anything mentioning Padmaavat and falsely assuming it's infringing. And, again, Facebook assisted this without good reason.
No matter what, it's yet another example in our increasingly long list of cases where copyright is used for censorship.

posted 13 days ago on techdirt
By now it has been pretty well established that the security and privacy of most "internet of things" devices is decidedly half-assed. Companies are so eager to cash in on the IoT craze that nobody wants to take responsibility for their decision to forget basic security and privacy standards. As a result, we've now got millions of new attack vectors being introduced daily, including easily-hacked "smart" kettles, door locks, refrigerators, power outlets, Barbie dolls, and more. Security experts have warned the check for this dysfunction is coming due, and it could be disastrous. Smart televisions have long been part of this conversation, where security standards and privacy have also taken a back seat to blind gee whizzery. Numerous set vendors have already been caught hoovering up private conversations or transmitting private user data unencrypted to the cloud. One study last year estimated that around 90% of smart televisions can be hacked remotely, something intelligence agencies, private contractors and other hackers are clearly eager to take full advantage of. Consumer Reports this week released a study suggesting that things aren't really improving. The outfit, which is working to expand inclusion of privacy and security in product reviews, studied streaming devices and smart TVs from numerous vendors. What they found is more of the same: companies that don't clearly disclose what consumer data is being collected and sold, aren't adequately encrypting the data they collect, and still don't seem to care that their devices are filled with security holes leaving their customers open to attack. Consumer Reports was quick to highlight Roku's many smart TVs and streaming devices, and Roku's failure to address an unsecured API vulnerability that could allow an attacker access to smart televisions operating on your home network.
This is one of several problems that has been bouncing around since at least 2015, notes the report: "The problem we found involved the application programming interface, or API, the program that lets developers make their own products work with the Roku platform. “Roku devices have a totally unsecured remote control API enabled by default,” says Eason Goodale, Disconnect’s lead engineer. “This means that even extremely unsophisticated hackers can take control of Rokus. It’s less of a locked door and more of a see-through curtain next to a neon ‘We’re open!’ sign." To become a victim of a real-world attack, a TV user would need to be using a phone or laptop running on the same WiFi network as the television, and then visit a site or download a mobile app with malicious code. That could happen, for instance, if they were tricked into clicking on a link in a phishing email or if they visited a site containing an advertisement with the code embedded." Roku was quick to issue a blog post stating that Consumer Reports had engaged in the "mischaracterization of a feature," and told its customers not to worry about it: "Consumer Reports issued a report saying that Roku TVs and players are vulnerable to hacking. This is a mischaracterization of a feature. It is unfortunate that the feature was reported in this way. We want to assure our customers that there is no security risk. Roku enables third-party developers to create remote control applications that consumers can use to control their Roku products. This is achieved through the use of an open interface that Roku designed and published. There is no security risk to our customers’ accounts or the Roku platform with the use of this API. In addition, consumers can turn off this feature on their Roku player or Roku TV by going to Settings>System>Advanced System Settings>External Control>Disabled." 
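To make the "see-through curtain" concrete: the feature in question is Roku's published External Control API, a plain HTTP interface the device exposes on the local network (conventionally on port 8060). The sketch below is illustrative, not an exploit; the device address is a hypothetical placeholder, and the point is simply what's missing from the request.

```python
# Minimal sketch of an unauthenticated local-network control API, modeled on
# Roku's published External Control API (keypress commands over HTTP on port
# 8060). ROKU_IP is a hypothetical placeholder for a set on your WiFi network.
import urllib.request

ROKU_IP = "192.168.1.50"  # hypothetical device address on the LAN


def build_keypress(ip: str, key: str) -> urllib.request.Request:
    # Note what's absent: no token, no password, no signed request.
    # A bare POST to the keypress endpoint is all the API expects.
    return urllib.request.Request(
        f"http://{ip}:8060/keypress/{key}", data=b"", method="POST"
    )


def press(key: str) -> int:
    # Any process on the same network -- including malicious code loaded by a
    # browser on that network, as Consumer Reports describes -- could fire
    # this request and drive the TV's remote control.
    with urllib.request.urlopen(build_keypress(ROKU_IP, key), timeout=2) as resp:
        return resp.status
```

The specific endpoint isn't the issue; the issue is that the only "credential" involved is being on the same network, which is exactly the unlocked-door problem Goodale describes.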
Roku fails to mention that disabling the feature also prevents consumers from controlling the device with Roku's own app, taking away valuable functionality from the end user (something Consumer Reports mentions in its write-up). And Roku doesn't even address the other complaints in the report, including concerns that streaming hardware and TV companies aren't making data collection and third-party sales clear, aren't clearly showcasing their privacy policies, and often don't let users opt out of such collection without losing functionality (much like the broadband ISPs and numerous services and apps these devices are connected to). Roku's response highlights the "somebody else's problem" approach inherent in the IOT. As experts like Bruce Schneier have repeatedly noted, the tech industry is caught in a cycle of security dysfunction where nobody in the chain has any real motivation to actually fix the problem: "The market can't fix this because neither the buyer nor the seller cares. Think of all the CCTV cameras and DVRs used in the attack against Brian Krebs. The owners of those devices don't care. Their devices were cheap to buy, they still work, and they don't even know Brian. The sellers of those devices don't care: they're now selling newer and better models, and the original buyers only cared about price and features. There is no market solution because the insecurity is what economists call an externality: it's an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution." Schneier has repeatedly warned that we need cooperative engagement between governments, companies, experts and the public to craft overarching standards and policies. The alternative isn't just a few hacks and embarrassing PR gaffes now and again. 
The influx of millions of poorly secured internet-connected devices (many of which are being automatically integrated into historically-nasty botnets) is a massive dumpster fire with the potential for genuine human casualties. It's easy to downplay these kinds of reports as just "a few minor problems with a television set," but that ignores the massive scope of the problem and the chain of security and privacy apathy that has created it. Permalink | Comments | Email This Story

Read More...
posted 13 days ago on techdirt
Eric Goldman has come across an amazing pro se lawsuit [PDF] being brought by Nicholas C. Georgalis, an aggrieved social media user who believes he's owed an open platform in perpetuity, no matter what awful things he dumps onto service providers' pages. Oh, and he wants Section 230 immunity declared unconstitutional. Georgalis -- who sidelines as a "professional training professionals" when not filing stupid lawsuits -- is suing Facebook for periodically placing him in social media purgatory after removing posts of his. The lawsuit is heady stuff. And by "heady stuff," I mean we're going to be dealing with a lot of arguments about "sovereign rights" and "common law" and other related asshattery. Here's the opening. And it only gets better/worse from there: Now comes Plaintiff in suit in a court of law holding Facebook, Inc, Defendant, liable for willfully and with malice aforethought abrogating the priceless, God given, and thus inalienable right to free Speech, freedom of the press, freedom of religion, and the inalienable right to due process as guaranteed under the First and Fifth Amendment of the US Constitution respectively… [...] Plaintiff has standing through Defendant's repeated, prolonged, and unconstitutional blocking, and otherwise restricting with great aplomb, Plaintiff's ability to post his public comments which include, but are not limited to political opinions, philosophical observations, cultural observations, religious and scientific observations, and ideas on Defendant's publicly offered and universally available electronic platform. Such ideas and opinions are the private property of Plaintiff and not to be taken without due process by anyone including Defendant. This is the first time I've seen it argued that a private corporation's moderation decisions are a Fifth Amendment violation. Nonetheless, that's what we're dealing with. Georgalis has been temp-banned repeatedly and had posts removed. 
Well, let's take a look at the value Georgalis is adding to the Facebook platform. [O.J.] simpson - more proof that you can take a darkie out of the jungle but you can't take the jungle out of the darkie. [...] The Negroid evolved from lower animals while God created the Caucasoid and the Mongoloid evolved from the the Caucasoid. This find merely proves that the modern human visited Africa after the Creation. [...] I agree with the fact that this proposed union will taint the blood of the Royal Family. Miscegenation of this sort is akin to bestiality and thus an affront to God and to man. It is a threat to the survival of mankind. It must not stand. That's just a taste of the stuff that's still live. The lawsuit provides no detail on the posts Facebook has found offensive enough to remove. Georgalis is a Trump fan (he often refers to Trump as a capital-K "King") and an obvious bigot. That he receives a lot of direct moderation from Facebook isn't surprising. But Georgalis somehow believes deep in his sovereign, bigoted heart that Facebook should never take action against his account or Facebook posts. Here's how he explains it: Defendant has repeatedly denied and thus silenced Plaintiff ability to express his opinion on Defendant's publicly and universally available electronic forums which said opinions or comments Defendant disagrees or finds otherwise objectionable. Indeed Defendant has had the audacity to remove content posted by Plaintiff that Defendant did not like and thus erasing his written words, which are his property, from the sight and memory of man and the eyes of posterity. In so doing Defendant promotes his political, cultural, religious, philosophical, and economic opinions and ideas above all others and at the expense of Plaintiff's before the voting public… Good lord. Georgalis' Section 230 argument is just as bad as everything preceding it. 
To sum up (because direct quoting would eat up pages of text and valuable real estate in readers' brains), Georgalis argues the immunity provided to service providers by Section 230 means they should never have to practice moderation. If they're immune from civil liability for end users' posts and actions, they shouldn't ever take action against third-party content. Georgalis targets Section 230(c)(2)(A) specifically -- the part that states ISPs will not be held liable for voluntary moderation efforts. In Georgalis' eyes, this elevates Facebook, et al. into proxy censors of unpopular speech and somehow confers sovereign status on social media platforms. Georgalis' twisted legal argument comes to the conclusion that Section 230 is a violation of the "separation of powers enshrined in the enumerated powers of the US Constitution." Therefore: unconstitutional. And then the lawsuit goes on for another dozen pages, which deploy even more ridiculous arguments in an attempt to talk the court into viewing social media companies as extensions of the government. This becomes even more cognitively dissonant when Georgalis' favored political leader and party are running the country. His "king" is somehow using Section 230 to shut down opinions the government doesn't like, even if his opinions are probably of the sort the current government does like. Go figure. Total damages requested are $1 billion. Because you can't put a price tag on free speech. But if you do have to come up with an estimate, be insanely ridiculous about it. This damage award is buttressed by arguments that government taxation and liberal social policies have stifled the US economy so much Georgalis would be almost 80 times as wealthy as he currently is. Or something. 
The punitive damages are also supported by the fact that the statist and stoic philosophy and ideology and Keynesian economics promulgated by the Defendant as earnestly implemented by the US governance, education and other institutions since 1930 has led to tremendous economic losses. Exhibit 1 presents an analysis of the extent of the damage done to the US economy by the statist and stoic ideology espoused by Defendant wherein the 2016 GDP would have been almost 80 times larger in constant dollars. To add the final inadvertent lol to Georgalis' stupid lawsuit, he's appended a copyright notice to every page of the filing claiming no one can copy or reproduce it without his written permission. You'll note the lawsuit is linked above and embedded below. It's also quoted as extensively as I could stomach. So... ball's in your common law court, Nick. This suit won't go anywhere and it will add to the number of times the state has beaten Georgalis at his own game. Georgalis -- after losing a defamation lawsuit where he admitted the "libelous" statements made about him were factually true -- tried to have an Ohio court rule that summary judgment rulings were unconstitutional. Check this out: Ohio Civil Rule 56, Summary Judgment is unconstitutional because it deprives litigants, in the instant case Plaintiffs/Appellants, the constitutional right to trial by jury. Accordingly it violates Article 1.05 of the Ohio Constitution which plainly and unequivocally states that "The right of trial by jury shall be inviolate... " Ohio Civil Rule 56 endows powers upon the court that were never intended by the authors of the Ohio Constitution and the people of the State of Ohio who ratified the constitution. Summary judgment usurps the constitutional power of the jury to decide the facts in a case and instead unconstitutionally endows the judge with these powers, powers that the judge was never intended to have. 
Georgalis appears to believe he's continually being deprived of due process, even when he's engaged in civil litigation. The Fifth Amendment only covers criminal cases. He also believes the state should waste more money paying jurors, judges, and lawyers to ensure every ridiculous lawsuit gets presented to a jury. I can't see how he squares this with his small-government assertions. (This filing probably has more to do with him being on the hook for appellate fees from his failed defamation lawsuit than any pure notion of constitutionality.) Then there are Georgalis' multiple battles with public entities over the release of certain information. It appears Georgalis has asked several states to hand over info on registered engineers, including their email addresses. His appeal to the state of Delaware was denied by the attorney general, who pointed out Georgalis hardly has the public interest in mind when demanding info on licensed engineers. Here, DAPE (Delaware Association of Professional Engineers) does not dispute that the right to privacy may be outweighed by the public interest in disclosure. Rather, DAPE argues that your request is a clear attempt to further your private commercial interest and in no way contributes to the public understanding of the activities of the government. DAPE notes that you are a developer and instructor of training courses, which you make available to professional engineers for a fee, and argues that you are using FOIA to obtain the email addresses of private citizens who meet the target audience of your product for sale. This suit will be tossed and undoubtedly Georgalis will chalk this up to the government protecting its own -- even if the current government is the government he desires and "its own" is a private corporation that provides a social media service it can moderate however it wants without troubling the Constitution. Permalink | Comments | Email This Story

Read More...
posted 13 days ago on techdirt
Give your IT career a boost with the Complete 2018 CompTIA Certification Training Bundle. 14 courses cover the most common hardware and software technologies in business, and the skills necessary to support complex IT infrastructures. The courses are designed to help you prepare for the various CompTIA certification exams. The bundle is on sale for $59. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team. Permalink | Comments | Email This Story

Read More...
posted 13 days ago on techdirt
Back in December, right before the Waymo/Uber trial was supposed to begin (before it got delayed due to an unexpected bombshell about withholding evidence that... never actually came up at the trial), I had a discussion with another reporter about the case, in which we each expressed our surprise that a settlement hadn't been worked out before going to trial. It seemed as though part of the case was about the two companies simply disliking each other, rather than there being a particularly strong legal case. A year ago, when the case was filed, I expressed disappointment at seeing Google filing this kind of lawsuit. My concern was mainly over the patent part of the case (which was dropped pretty early on), and the fact that Google, historically, had shied away from suing competitors over patents, tending to mostly use them defensively. But I had concerns about the "trade secrets" parts of the case as well. While there does seem to be fairly clear evidence that Anthony Levandowski -- the ex-Google employee at the heart of the discussion -- did some sketchy things in the process of leaving Google, starting Otto, and quickly selling Otto to Uber, the case still felt a lot like a backdoor attempt to hold back employee mobility. As we've discussed for many years, a huge part of the reason for the success of Silicon Valley in dominating the innovation world has to do with the ease of employee mobility. Repeated studies have shown that the fact that employees can switch jobs easily, or start their own companies easily, is a key factor in driving innovation forward. It's the sharing and interplay of ideas that allows the entire industry to tackle big problems. Individual firms may compete around those big breakthroughs, but it's the combined knowledge, ideas, and perspective sharing that produces them in the first place. And even though that's widely known, tech companies have an unfortunate history of trying to stop employees from going to competitors. 
While non-competes have been ruled out in California, a few years back there was a big scandal over tech companies having illegal handshake agreements not to poach employees from one another. It was a good thing to see the companies fined for such practices. However, the latest move is to use "trade secrets" claims as a way to effectively get the same thing done. The mere threat of lawsuits can stop companies from hiring employees, and can limit an employee's ability to find a new job somewhere else. That should concern us all. In this lawsuit, though, everything was turned a bit upside down. Part of it was that there did appear to be some outrageous behavior by Levandowski. Part of it was that, frankly, there are few companies out there disliked as much as Uber. It does seem that if it were almost any other company on the planet, many more people would have been rooting against Google as the big incumbent suing a smaller competitor. But, in this case, many, many people seemed to be rooting for Google out of a general dislike of Uber itself. My own fear was that this general idea of "Uber = bad" combined with "Levandowski doing sketchy things" could lead to a bad ruling which would then be used to limit employee mobility in much more sympathetic settings. Thankfully, that seems unlikely to happen. As Sarah Jeong (whose coverage of this case was absolutely worth following) noted, despite all the rhetoric, it wasn't at all clear that Waymo proved its case. Lots of people wanted Google/Waymo to win for emotional reasons, but the legal evidence wasn't clearly there. And now the case is over. As the trial was set to continue Friday morning, it was announced that the two parties had reached a settlement, in which Uber basically hands over a small chunk of equity to Waymo (less than Waymo first tried to get, but still significant). 
As Jeong notes in another article, both sides had ample reasons to settle -- but the best reason of all is that they can now focus on competing in the market rather than in the courtroom, without setting a bad and dangerous precedent concerning employee mobility in an industry where that mobility is vital. Permalink | Comments | Email This Story

Read More...
posted 13 days ago on techdirt
You might recall that just a few years ago, HBO had to be dragged kicking and screaming into the modern era. For years the company refused to offer a standalone streaming TV service, worried that it would jeopardize the company's cozy promotional relationship with existing cable providers (who often all but give away the channel in promotions). As recently as 2013 Time Warner CEO Jeff Bewkes was claiming that such an offering would make "no economic sense." Why? Bewkes was worried that offering a standalone option would upset cable partners. At the time, those partners were already offering an HBO streaming app named HBO Go, but only if you signed up for traditional TV. This was part of the industry's walled garden "TV Everywhere" initiative, a misguided attempt at stopping cord cutters by only giving them innovative streaming services -- if they signed up for bloated, traditional television bundles. Bewkes was clearly worried at the time that being too damn innovative would upset industry executives and skew the company's balance sheets: "And we would do it if we thought it was in our economic best interest. At this point we don’t think it makes sense. We don’t think the target market is sufficiently large to be attractive at this point. So what we’re doing, and we think this is working pretty well — we’re working with the [pay TV operators] to increase the penetration of HBO Go in a mutually beneficial way." At the time we noted how HBO was letting fear trump innovation. The company was focusing so much on avoiding upsetting cable operators and worrying over the initial impact on the traditional cable TV cash cow that it forgot that innovation often trumps the math. In reality, the math Bewkes was concerned about was performance metrics built on a different, changing market that was on the way out. 
This kind of hesitation was initially great news for Netflix, whose CEO saw all of this coming long before HBO executives did: "The goal," says Hastings, "is to become HBO faster than HBO can become us." All the while, HBO and Time Warner's timidity and failure to listen to consumers resulted in many of its shows breaking piracy records. And while HBO couldn't be bothered to offer a legitimate standalone streaming alternative to piracy, it did spend a lot of time and money trying to derail these efforts, including "poisoning" seeded copies of HBO programs on BitTorrent and sending out oodles of nastygrams to ISPs. Other HBO executives, meanwhile, seemed to share the cable industry mindset that this whole cord cutting thing was just a temporary phenomenon that would blow over. HBO finally did buckle, announcing a standalone streaming service (dubbed HBO Now) in 2014. Just a few years later, the service has breached 5 million subscribers. And oh, the numbers HBO was so worried about are looking solid too, with HBO Now generating $19 million in revenue during the two months that Game of Thrones Season 7 aired. In this case it all worked out well for HBO, but the company could have enjoyed a much healthier head start if its executives hadn't let fear trump natural evolution. Permalink | Comments | Email This Story

Read More...
posted 13 days ago on techdirt
For years, Manhattan DA Cy Vance has been warning us about the coming criminal apocalypse spurred on by cellphone encryption. "Evil geniuses" Apple introduced default encryption in a move likely meant to satiate lawmakers hollering about phone theft and do-nothing tech companies. In return, Vance (and consecutive FBI directors) turned on Apple, calling device encryption a criminal's best friend. Vance still makes annual pitches for law enforcement-friendly encryption -- something that means either backdoors or encryption so weak it can be cracked immediately. Both ideas would also be criminal-friendly, but Vance is fine with sacrificing personal security for law enforcement access. Frequently, these pitches are accompanied by piles of uncracked cellphones -- a gesture meant to wow journalists but ultimately indicative of nothing more than how much the NYPD can store in its evidence room. (How many are linked to active investigations? How many investigations continued to convictions without cellphone evidence? Were contempt charges ever considered to motivate cellphone owners into unlocking phones? So many questions. Absolutely zero answers.) Will Vance be changing his pitch in the near future? Will he want weakened encryption safeguarding the NYPD's new tools? I guess we'll wait and see. (h/t Robyn Greene) Announced last year, the shift will see some 36,000 Nokia handsets replaced over the coming weeks. Initially purchased in 2014 as part of a $160 million program to modernize police operations, the Nokia phones running Windows Phone will be collected, wiped and sold back to the company. The move to iPhone 7 comes at no cost to the NYPD, as the handsets are considered upgrades under the agency's contract with AT&T. NYPD's rollout began last month when officers patrolling the Bronx and Staten Island swapped their obsolete Nokia smartphones for Apple devices. 
The department is handing out about 600 iPhones per day, according to NYPD Deputy Commissioner for Information and Technology Jessica Tisch. Let's get some crippled encryption for these guys. After all, their phones are manufactured by a company an FBI forensic detective called an "evil genius." Let's give malicious hackers an attack vector and street criminals more reasons to lift an iPhone off… well, anybody. By all means, let's give Vance what he wants and see if he hears anything back from his buddies in blue. This upgrade puts Vance in a lose-lose situation. If he stops calling for weakened encryption, he's a hypocrite. If he keeps calling for it, he's an asshole. But it should drive home an important point: encryption doesn't just protect the bad guys. It protects the good guys as well. Permalink | Comments | Email This Story

Read More...
posted 14 days ago on techdirt
This week, our first place winner on the insightful side comes in response to the FCC's refusal to release certain records to a FOIA request. David noted that their reason — "to prevent harm to the agency" — was a big problem: It's not the job of the agency to prevent harm to the agency. It is the job of the agency to prevent harm to consumers. The ones paying its salaries. The FOIA act ensures that the employers of public officials have the means to make sure that the officials are doing the job they are being paid for by the people. If that would be detrimental to the good of the agency, the good of the agency is not aligned with the good of the people and salaries are obtained under fraudulous pretenses. Basically the answer is "Accountability? I beg your pardon, we are criminals!" In second place on the insightful side, we've got That One Guy with a response to the perennial and patently silly accusation that we are partisan hacks: If you didn't notice those sorts of articles cropping up as often when Obama was in office, perhaps it's because he wasn't engaging in such actions nearly as much as the current administration. He was criticized plenty when he did something wrong, if Trump and team get criticized more it's probably because they're doing more worthy of being criticized. For editor's choice on the insightful side, we start out with a comment from Roger Strong, who took the common light comparison between internet platforms and telephone companies and expanded on it, and its connection to one of the biggest myths about CDA 230: And there were court battles over exactly that, a century or so ago. The upshot was that the phone companies weren't liable. Online services had battles over this before the Communications Decency Act: In 1991 Cubby v. CompuServe ruled that CompuServe was merely a distributor, rather than a publisher. It was only liable for defamation if it knew, or had reason to know, of the defamatory nature of content in its forums. 
Since it wasn't moderating them, it didn't know. In 1995 Stratton Oakmont v. Prodigy went the other way. Prodigy moderated its forums, wanting a family-friendly environment. And so the court ruled that it was liable for what was posted. All of which could only mean one thing: Online services that chose to remain ignorant of their content were immune from liability. Those that moderated content, even in good faith, assumed full publisher liability. 1996's CDA 230 changed this. It's now safe to make good faith efforts to prevent criminal activity. Remove 230's protections, and we may go back to "ignorance is safety." Which would be a gift to the criminals, though those who want to kill CDA 230 will deny it. Next, though by now there is plenty of analysis of the Nunes memo from every angle, our second editor's choice is a nod to the anonymous commenter who provided one of the meatiest comments in the Techdirt discussion on the topic: Two key points from the Nunes memo In a stunning case of "own goal", the very end of the memo points out that the FBI had an investigation going long before the Steele memo (which isn't a memo at all, but a series of reports) came along. There are two reasons that the FBI paid attention to the Steele memo: (1) Steele has a reputation, a very good one, along with lots of experience and a sizable network of contacts (2) the contents of Steele documents matched things THEY ALREADY KNEW TO BE TRUE. The second point bears some explanation, because most of you don't have jobs that require the assessment of raw intelligence that comes from multiple people who may be omitting things or fabricating things or deliberately embedding some truth in a web of lies. The Steele memo is just that kind of raw intelligence, which is why -- if you take the time to read it -- you'll notice that Steele himself points out the possible presence of these issues. 
But when you get your hands on raw intelligence, and it gives you -- let's say -- 100 facts that you can check, and you find that 82 of them are true, 16 are unverifiable, and 2 are false -- then you have good reason to think that at least some of those 16 are worth further investigation because they may well turn out to be true. That's why you get a warrant: first, to re-re-re-verify the 82 and second, to find out about those 16. That's your JOB. Then of course you have to make some progress. Because if you don't, then you're not going to get multiple judges to renew your warrant multiple times. You might still not be able to check all 16 of those outstanding items, but if you can check 4 and make progress on 7, then you're getting there and it's reasonable for a judge to grant more time. If you can't check any of them, then maybe you're barking up the wrong tree and the warrant you seek isn't going to help anyway. One more thing. This isn't an edge case. Anyone who goes out of their way to pal around with intelligence agents from another country, even a friendly one, should expect that they're going to get surveilled: by us, by them, and by third parties who are of course interested in such things for reasons of their own. And anyone who openly brags about it should REALLY expect scrutiny. I have no great love for the FBI, but in this case, they did exactly what any sensible organization should do: start watching people who are heavily interacting with known agents of a hostile foreign power. Over on the funny side, instead of a first and second place winner, we have a rare perfect tie for the top spot, both from anonymous commenters! So in no particular order, we've got a response to apologists for the aggressive use of copyright on Martin Luther King Jr.'s works: Yes, without copyright protection there would have been no incentive for Dr. King to make speeches! 
Next we've got a reply to some rant or another by one of our loopier critics: The Techdirt logo doesn't have a gold border. Under the Banana Republic Second Circuit Court of Captain Kangaroo, I hereby place you on time out from your silly postings. For editor's choice on the funny side, we've got a one-two punch on our story about the FCC patting itself on the back for its incredibly stupid first year. An anonymous commenter chimed in early: Figuratively speaking. To be specific, they mandated that the FTC do the actual back-patting for them. Then XcOM987 added a further thought: If all goes to plan though the FTC won't have the authority to pat the FCC on the back That's all for this week, folks! Permalink | Comments | Email This Story

Read More...
posted 15 days ago on techdirt
Five Years Ago This week in 2013, the EU was taking a worryingly restrictive approach to trying to fix copyright licensing, France's Hadopi was trying to get the national library to use more DRM, and Japan was planning to seed P2P networks with fake files containing copyright warnings. The UK, on the other hand, rejected plans to create a new IP Czar, though a new copyright research center seeking to restore some balance to the overall debate was facing heavy opposition right out of the gate. This was also the week that we wrote about the curious privacy claims about tweets from an investigative journalist named Teri Buhl, which quickly prompted a largely confused response and, soon afterwards, threats of a lawsuit. Ten Years Ago This week in 2008, the recording industry was continuing its attempts to sue Baidu and floating fun ideas like building copyright filters into antivirus software, while we were taking a look at the morass of legacy royalty agreements holding back the industry's attempts at innovation. A Danish court told an ISP it had to block the Pirate Bay, leading the ISP to ask for clarification while it considered fighting back. And Microsoft was doing some scaremongering in Canada in pursuit of stronger copyright laws. Fifteen Years Ago This week in 2003, Germany's patent office was seeking a copyright levy on all PCs, while the EU was mercifully pushing back on attempts to treat more infringement as criminal. One record label executive was telling the industry it had to embrace file sharing or die, but the company line was still the language of moral panic. Speaking of which, in an interview in the Harvard Political Review, Jack Valenti was asked about his infamous "Boston strangler" warning about VCRs — and proceeded to tell a bunch of lies to claim his warning was in fact apt. Permalink | Comments | Email This Story

posted 15 days ago on techdirt
It is something of an unfortunate Techdirt tradition that every time the Olympics rolls around, we are alerted to some more nonsense by the organizations that put on the event -- mainly the International Olympic Committee (IOC) -- going out of their way to be completely censorial in the most obnoxious ways possible. And, even worse, watching as various governments and organizations bend to the IOC's will on no legal basis at all. In the past, this has included the IOC's ridiculous insistence on extra trademark rights that are not based on any actual laws. But, in the age of social media it's gotten even worse. The Olympics and Twitter have a very questionable relationship, as the company has been all too willing to censor content on behalf of the Olympics, while the Olympic committees, such as the USOC, continue to believe merely mentioning the Olympics is magically trademark infringement. So, it's only fitting that my first alert to the news that the Olympics are happening again was hearing how Washington Post reporter Anna Fifield, who covers North Korea for the paper, had her video of the unified Korean team taken off Twitter based on a bogus complaint by the IOC: Twitter took down my video of the unified Korean team entering the stadium, on the IOC’s orders. pic.twitter.com/umffjawRqG — Anna Fifield (@annafifield) February 9, 2018 And Twitter complied even though the takedown is clearly bogus. Notice Fifield says that it is her video? The IOC has no copyright claim at all in the video, yet they filed a DMCA takedown over it. The copyright is not the IOC's and therefore the takedown is a form of copyfraud. Twitter should never have complied and shame on the company for doing so. Even more ridiculous: Twitter itself is running around telling people to "follow the Olympics on Twitter." Well, you know, more people might do that if you weren't taking down reporters' coverage of those very same Olympics. Oh, and it appears that Facebook is even worse.
They're pre-blocking the uploads of such videos: I couldn’t even post it at all on Facebook pic.twitter.com/RNSzsxSthM — Anna Fifield (@annafifield) February 9, 2018 This is fucked up and both the IOC and Facebook should be ashamed. The IOC can create rules for reporters and can expel them from the stadium if they break those rules, but there is simply no legal basis for them to demand such content be taken off social media, and Twitter and Facebook shouldn't help the IOC censor reporters.

posted 16 days ago on techdirt
Online platforms have enabled an explosion of creativity — but the laws that make this possible are under attack in NAFTA negotiations. We recently launched EveryoneCreates.org to share the stories of artists and creators who have been empowered by the internet. This guest post from Public Knowledge's Gus Rossi explores what's at stake. In the past few weeks, we at Public Knowledge have been talking with decision-makers on Capitol Hill about NAFTA. We wanted to educate ourselves on the negotiation process for this vital trade agreement, and fairly counsel lawmakers interested in its effects on consumer protection. And we discovered a thing or two in this process. It won’t surprise anyone that we don’t always agree with lobbyists for the big entertainment companies when it comes to creating a balanced copyright system for internet users. But some of the ideas these groups are advancing are wildly misleading, brutally dishonest, and even dangerous to democracy. We wanted to share the two wildest ideas the entertainment industries are proposing in the new NAFTA, so you can help us set the record straight before it’s too late: 1) Safe harbors enable child pornography and human trafficking. Outside specialized circles, common wisdom is that “safe harbors” are free get-out-of-jail cards that internet intermediaries like Facebook can use to avoid all responsibility for anything that internet users say or do on their services. Leveraging this fallacy, entertainment industry lobbyists are arguing that safe harbors facilitate child pornography and human trafficking. Therefore, the argument follows, NAFTA should not promote safe harbors. This is highly misleading. Safe harbors are simply legal provisions that exempt internet intermediaries such as YouTube or Twitter, and broadband providers such as Comcast or AT&T, from liability for the infringing actions of their users under certain specific circumstances.
Without safe harbors, internet intermediaries would be obligated to censor and control everything their users do on their platforms, as they would be directly liable for it. Everything from social media, to internet search engines, to comment sections in newspapers, would be highly restricted without some limitations on intermediary liability. The Digital Millennium Copyright Act (DMCA) and Section 230 of the Communications Decency Act (CDA 230) establish the two most important limitations for online intermediaries in US law. According to the DMCA, internet access providers (such as Comcast, AT&T, and Verizon) are not liable for the alleged copyright infringement of users on their networks, so long as they maintain a policy of terminating repeat infringers. Content hosts (such as blogs, image-hosting sites, or social media platforms), on the other hand, have to remove material if the copyright holder sends a takedown notice of infringement. CDA 230 says that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them directly responsible for what others say and do. The relevant safe harbor for the interests of the entertainment industries is the DMCA, not CDA 230. CDA 230 specifically excludes copyright from its umbrella. And DMCA is exclusively about copyright. It is incredibly dishonest and shallow for these lobbyists to use the specter of child abuse to drum up support for their position on copyright in NAFTA. No one should try to obfuscate a complicated policy discussion by accusing their opponents of promoting child sex trafficking. 2) Exceptions and limitations to copyright are unnecessary in trade agreements.
According to none other than the World Intellectual Property Organization, exceptions and limitations to copyright -- such as fair use -- exist “[i]n order to maintain an appropriate balance between the interests of rights holders and users of protected works, [allowing] cases in which protected works may be used without the authorization of the rights holder and with or without payment of compensation.” Without exceptions and limitations, everything from using a news clip for political parody, to sharing a link to a news article on social media, to discussing or commenting on just about any work of art or scholarship -- all could constitute copyright infringement. Yet, the entertainment industries are arguing that exceptions and limitations are outdated and unnecessary in trade agreements. They say that copyright holders should be protected from piracy and unlawful use of their works, claiming that any exceptions and limitations are a barrier to the protection of American artists. This is also wildly inaccurate. American artists and creators remix, reuse, and draw inspiration from copyrighted works every single day. If our trade partners don’t adopt exceptions and limitations to copyright, then these creators could be subject to liability when exporting their work to foreign countries. Exceptions and limitations to copyright are necessary both in the US and elsewhere. Our copyright system simply wouldn’t work without them, especially in the digital age. Conclusion: We need to set the record straight. Given its political and economic importance, NAFTA could be the standard for future American-sponsored free trade agreements. But NAFTA could have dramatic and tangible domestic consequences if it undermines safe harbors and exceptions and limitations to copyright.
In the next policy debate around copyright infringement or intermediary liability, the entertainment industries will point to NAFTA as an example of the US Government’s stated policy and where the world is moving. Furthermore, these lobbyists will have already convinced many on Capitol Hill that safe harbors enable child abuse and that fair use is unnecessary. The entertainment industries know how to walk through the corridors of power day after day -- they’ve been doing so for well over a century. It’s not too late to fight back, set the record straight, and defend a balanced approach to copyright and consumer protections in NAFTA. You can start by contacting your representative. But the clock is ticking. Join Public Knowledge in the fight to keep the internet open for everyone. Visit EveryoneCreates.org to read stories of creation empowered by the internet, and share your own!

posted 16 days ago on techdirt
Last week, Santa Clara University hosted a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants in the event have written essays about the questions that were discussed at the event. This is the last in the series for now. We hope you enjoyed them. Patreon occupies a unique space among platforms with user generated content, which allows us to take a less automated approach to content moderation. As a membership platform that makes it easy for creators to get paid directly by their fans, we both host user generated content and act as a payment provider for creators. As a result, Patreon must have a higher bar for moderation and removal. It is not just individual content that is at risk, but potentially the creator's entire source of income. Our goal is to have a moderation and removal process that feels like working with an enterprise SaaS provider that powers your business, instead of a distrustful content hosting platform. This is a challenge on a platform with no vetting before a creator is able to set up a page and with a large number of active creators. To achieve this goal, we treat our creators like partners and work through a series of escalating steps as part of our moderation and removal process.

Patreon's Moderation and Removal Process

We want to give creators on Patreon a kinder moderation experience, so our first step is to send them a personalized email to let them know that their content has violated our guidelines. This initial contact is primarily to establish a line of communication, educate the creator on guidelines that they may not have known about, and give them a sense of agency in dealing with our guidelines. The vast majority of the time this process results in a mutually beneficial outcome, as the creator wants to continue receiving their funding and we want to continue working with them.
We sometimes even use this approach before a creator has violated our guidelines if we see them posting content or exhibiting behavior that is likely to result in a violation. This early outreach helps to educate creators before it becomes a problem. When specific content poses an extreme risk, or when previous conversations fail to achieve the desired outcome, we then proceed to suspension. Our suspension state removes the page from public view and pauses all payment processing. It still allows the creator to log in to their page to make changes. The purpose of this feature is to give creators agency, because they can choose how to edit their pages to become compliant. We've heard from creators about how other moderation and removal processes are impersonal and inflexible. We want them to have the opposite experience when working with our team at Patreon. Creators are typically understanding of the requirement to change or remove specific content, but want to have control over how it is done and be part of the process. By disabling public access to the page we remove the risk the content poses to Patreon, and then allow the creators to control the moderation and removal process. We can be clear with creators about what steps they need to take for the suspension to be lifted, but allow the creator to retain their agency. Sometimes we are forced to remove a page, cutting off funding of a creator. Typically this is reserved for the most egregious content risks or when we see repeated re-offense. Even in these situations, we provide a path forward for the creator by allowing them to create a new page. We give the creator a list of their patrons' emails and offer them the opportunity to start fresh. This gives creators the opportunity to create a page within our guidelines, but resets their page and their relationship with patrons.
Permanent bans for individuals are the final possible step of this process, and the only bans we have issued so far have been for extreme situations where the creator's past behavior is a permanent risk, such as creators convicted of serious crimes.

How Will it Work at Scale?

Admittedly, Patreon has some unique advantages as a platform that allow us to spend much more time on our moderation and removal process than most platforms can on a per user basis. The first is that each new user on an ad-supported content hosting platform is worth less to the platform than each new Patreon creator with subscription payments. In fact, the controversy of any individual creator is often a function of the amount of income they are making. If a creator isn't making much money on Patreon they represent a lower risk. It is often only when that creator's income becomes more significant that concerned individuals will report it and then we investigate to see whether it complies with our guidelines. The second is that Patreon isn't a discovery platform. Discovery platforms solve the problem of zero to fan, of introducing a creator's work to the world and getting fans as a result. Patreon solves the problem of fan to patron, of getting those fans engaged and willing to support a creator with direct-to-creator contributions, rather than generating user ad impressions that send a creator pennies from an ad-revenue share. This lack of focus on discovery means two things. First, we don't promote people landing on creator pages they don't already know about, massively de-risking the possibility that someone who is offended by any particular piece of content will be exposed to it. This means everyone landing on a Patreon page has generally already self-selected to want to go there.
Second, much of the actual content on Patreon lives behind a paywall, dramatically reducing the possibility of the content going viral, and again reinforcing the self-selective nature of the people viewing that content on Patreon. These advantages mean we can continue to build and improve our moderation and removal process in a way that will scale without losing our human touch. We will always prioritize making sure creators can trust Patreon to run their creative business and have agency in the moderation and removal process. Colin Sullivan is Head of Legal for Patreon.

posted 16 days ago on techdirt
Most people don't understand the nuances of artificial intelligence (AI), but at some level they comprehend that it'll be big and transformative and cause disruptions across multiple sectors. And even if AI proliferation won't lead to a robot uprising, Americans are worried about how AI and automation will affect their livelihoods. Recognizing this anxiety, our policymakers have increasingly turned their attention to the subject. In the 115th Congress, there have already been more mentions of “artificial intelligence” in proposed legislation and in the Congressional Record than ever before. While not everyone agrees on how we should approach AI regulation, one approach that has gained considerable interest is augmenting the federal government's expertise and capacity to tackle the issue. In particular, Sen. Brian Schatz has called for a new commission on AI; and Sen. Maria Cantwell has introduced legislation setting up a new committee within the Department of Commerce to study and report on the policy implications of AI. This latter bill, the “FUTURE of Artificial Intelligence Act” (S.2217/H.R.4625), sets forth a bipartisan proposal that seems to be gaining some traction. While the bill's sponsors should be commended for taking a moderate approach in the face of growing populist anxiety, it's not clear that the proposed advisory committee would be particularly effective at all it sets out to do. One problem with the bill is how it sets the definition of AI as a regulatory subject. For most of us, it's hard to articulate precisely what we mean when we talk about AI. The term “AI” can describe a sophisticated program like Apple's Siri, but it can also refer to Microsoft's Clippy, or pretty much any kind of computer software. It turns out that AI is a difficult thing to define, even for experts. Some even argue that it's a meaningless buzzword.
While this is a fine debate to have in the academy, prematurely enshrining a definition in a statute – as this bill does – is likely to be the basis for future policy (indeed, another recent bill offers a totally different definition). Down the road, this could lead to confusion and misapplication of AI regulations. This provision also seems unnecessary, since the committee is empowered to change the definition for its own use. The committee's stated goals are also overly ambitious. In the course of a year and a half, it would set out to “study and assess” over a dozen different technical issues, from economic investment, to worker displacement, to privacy, to government use and adoption of AI (although, notably, not defense or cyber issues). These are all important issues. However, the expertise required to adequately deal with these subjects is likely beyond the capabilities of 19 voting members of the committee, which includes only five academics. While the committee could theoretically choose to focus on a narrower set of topics in its final report, this structure is fundamentally not geared towards producing the sort of deep analysis that would advance the debate. Instead of trying to address every AI-related policy issue with one entity, a better approach might be to build separate, specialized advisory committees based in different agencies. For instance, the Department of Justice might have a committee on using AI for risk assessment, the General Services Administration might have a committee on using AI to streamline government services and IT infrastructure, and the Department of Labor might have a committee on worker displacement caused by AI and automation or on using AI in employment decisions. While this approach risks some duplicative work, it would also be much more likely to produce deep, focused analysis relevant to specific areas of oversight.
Of course, even the best public advisory committees have limitations, including politicization, resource constraints and compliance with the Federal Advisory Committee Act. However, not all advisory bodies have to be within (or funded by) government. Outside research groups, policy forums and advisory committees exist within the private sector and can operate beyond the limitations of government bureaucracy while still effectively informing policymakers. Particularly for those issues not directly tied to government use of AI, academic centers, philanthropies and other groups could step in to fill this gap without any need for new public expenditures or enabling legislation. If Sen. Cantwell's advisory committee-focused proposal lacks robustness, Sen. Schatz's call for creating a new “independent federal commission” with a mission to “ensure that AI is adopted in the best interests of the public” could go beyond the bounds of political possibility. To his credit, Sen. Schatz identifies real challenges with government use of AI, such as those posed by criminal justice applications, and in coordinating between different agencies. These are real issues that warrant thoughtful solutions. Nonetheless, the creation of a new agency for AI is likely to run into a great deal of pushback from industry groups and the political right (like similar proposals in the past), making it a difficult proposal to move forward. Beyond creating a new commission or advisory committees, the challenge of federal expertise in AI could also be substantially addressed by reviving Congress' Office of Technology Assessment (which I discuss in a recent paper with Kevin Kosar). Reviving OTA has a number of advantages: OTA ran effectively for years and still exists in statute, it isn't a regulatory body, it is structurally bipartisan and it would have the capacity to produce deep-dive analysis in a technology-neutral manner. 
Indeed, there's good reason to strengthen the First Branch first, since Congress is ultimately responsible for making the legal frameworks governing AI as well as overseeing government usage. Lawmakers are right to characterize AI as a big deal. In fact, there are trillions of dollars in potential economic benefits at stake. While the instincts to build expertise and understanding first make for a commendable approach, policymakers will need to do it the right way – across multiple facets of government – to successfully shape the future of AI without hindering its transformative potential.

posted 16 days ago on techdirt
Another communications platform has published National Security Letters it has received from the FBI. Twilio -- a San Francisco-based cloud communications platform -- has published two NSLs freed from the confines of their accompanying gag orders. When Twilio receives requests that are issued without the review of a court, such as National Security Letters, Twilio will ask the agent to instead produce a court order or withdraw the nondisclosure component of the request. Twilio requested judicial review of the nondisclosure requirement, and as a result, received permission from the U.S. Department of Justice to publish two National Security Letters, in addition to the letters authorizing Twilio to do so. Twilio was also permitted to count the two National Security Letters in our semi-annual transparency report for the second half of 2017. Therefore, Twilio indicates receiving between 2 and 999 National Security Letters in the time range of July 1, 2017 through December 31, 2017. Twilio says it will continue to challenge the gag orders attached by default to FBI NSLs, which should result in more published NSLs in the future. The two posted by Twilio are fairly recent. Both were received in May of last year. Both also contain the FBI's response letter letting Twilio know the gag orders had been lifted. The first [PDF] of the two published lets Twilio know the FBI has agreed to lift the gag order. It also states the FBI is withdrawing its request for subscriber info. The second [PDF] is a little more interesting. The FBI agreed to lift the gag order, but requested Twilio give it a ring before notifying the affected customer. Please be advised that the FBI has reviewed the nondisclosure requirement imposed in connection with the NSL at issue and determined that the facts and circumstances supporting nondisclosure under 18 USC 2709(c) no longer continue to exist.
Consequently, the government is lifting the nondisclosure requirement imposed in connection with the NSL at issue… [T]he FBI also asks that Twilio notify Special Agent [redacted] of the FBI Cincinnati Field Office, in the event Twilio chooses to inform the subscriber of the account at issue regarding the NSL request or any of the information set forth in that request… This sounds like "assessment" stuff -- where the FBI rounds up everything it can obtain without a warrant to start building towards a preliminary investigation and possibly even the probable cause needed to continue pursuing a suspect. But the FBI office is seemingly willing to spook a subject in exchange for whatever minimal account info Twilio has on hand. That's a little strange, considering the gag order was lifted within a few months of the NSL being sent. The two published by Twilio are unlike the NSLs published elsewhere, some of which are closer to a decade old at this point. Whatever the case, it's more transparency from another service provider, adding to the body of public knowledge on the FBI's use of NSLs.

posted 16 days ago on techdirt
Sid Meier's Civilization needs little introduction, but the newest entry in the saga offers entirely new ways to engage with your world. The turn-based strategy franchise has sold over 35 million units worldwide since its creation, creating an enormous community of players attempting to build an empire to stand the test of time. Advance your civilization from the Stone Age to the Information Age by waging war, conducting diplomacy, advancing your culture, and going head to head with history's greatest leaders. There are five ways to achieve victory in Civilization VI. Which will you choose? Get started for $29.99. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

posted 16 days ago on techdirt
Officials at ICE are pitching a dangerous idea to an administration likely to give it some consideration. It wants a seat at the grown-up table where it can partake of unminimized intel directly. Internal advocates for joining America’s spy agencies—known as the Intelligence Community or the IC—focus on the potential benefits to the agency’s work on counterproliferation, money laundering, counterterror, and cybercrime. The official added that joining the IC could also be useful for the agency’s immigration enforcement work––in particular, their efforts to find and arrest undocumented immigrants with criminal arrest warrants (known in ICE as fugitive aliens). At this point, no one other than a few ICE officials really wants this to happen. Privacy and accountability activists say the last thing the White House should do is give the agency access to warrantless surveillance. ICE is a domestic enforcement agency and has no need to root around in foreign-facing data collections. The agency, however, feels foreign intel -- along with the unmentioned backdoor searches of domestic communications -- could aid it in tracking down drug traffickers, money launderers, and various cybercriminals. But it shouldn't have direct access. Nor should it ever really need it. Information sharing has been expanded, thanks to the last president, which means ICE likely already receives second-hand info from other IC members like the DHS, FBI, and DEA. Former government officials are wary of the idea of direct intel access, noting that it would result in more complications, rather than better immigration and customs enforcement. Peter Vincent, ICE's general counsel under Obama, had this to say: Unlike most intelligence agencies, which focus on gathering information about America’s adversaries, ICE’s agents and officers deal with federal courts every day. If they use classified material to generate leads, that information could be inadmissible in court.
Both the FBI and the Drug Enforcement Administration, which are in the Intelligence Community, deal with this issue. Adjusting would be a challenge for ICE. Vincent said this could create “many potential mission creep spectres, especially in this current climate,” and that he doesn’t think it would be necessary for ICE to join the Intelligence Community. We've seen how well dips into NSA stores have worked for these two law enforcement agencies. Parallel construction becomes the rule, rather than the exception, and cases are far more likely to be dropped if defense lawyers and judges start asking too many questions about presented evidence. Another former DHS intelligence official claims the added intel would do little more than "complicate the architecture," making it harder for ICE to do its job. If critical information needs to be shared with ICE, it could be done by bringing the head of ICE in on intel meetings, rather than adding ICE into the IC mix and adding yet another set of minimization rules to intel sharing. Bad idea or not, the push for ICE to join the Intelligence Community comes at the right time. While Trump has been extremely critical of other IC components -- particularly the FBI -- he's very fond of his domestic immigration enforcers, having given them free rein to enforce the law in whatever way they see fit.

posted 16 days ago on techdirt
We've noted repeatedly how ESPN has personified the cable and broadcast industry's tone deafness to cord cutting and TV market evolution. The company not only spent years downplaying the trend as something only poor people do, it sued companies that attempted to offer consumers greater flexibility in how video content was consumed. ESPN execs clearly believed cord cutting was little more than a fad that would simply stop once Millennials started procreating, and ignored surveys showing how 56% of consumers would ditch ESPN in a heartbeat if it meant saving the $8 per month subscribers pay for the channel. As the data began to indicate the cord cutting trend was very real, insiders say ESPN was busy doubling down on bloated sports licensing deals and SportsCenter set redesigns. By the time ESPN had lost 10 million viewers in just a few years, the company was busy pretending they saw cord cutting coming all the while. ESPN subsequently decided the only solution was to fire hundreds of longstanding sports journalists and support personnel, but not the executives like John Skipper (since resigned) whose myopia made ESPN's problems that much worse. Fast forward to this week, when Disney CEO Bob Iger suggested that Disney and ESPN had finally seen the error of their ways, and would be launching a $5 per month streaming service sometime this year. Apparently, Iger and other ESPN/Disney brass have finally realized that paying some of the least-liked companies in America $130 per month for endless channels of crap has somehow lost its luster in the streaming video era: "There are signs that young people are coming into multi-channel television. People that were once called or thought to be cord-nevers are starting to adopt less expensive over-the-top packages," Iger said. Who knew? Did you know? I certainly didn't know. 
Bloomberg, meanwhile, informs us that the company's new service is "Iger's bet on the future": "If anything it points to what the future of ESPN looks like,” Iger said on a conference call with investors. “It will be this app and the experience that it provides." But will it? There's every indication that ESPN's still only paying lip service to innovation. What consumers say they want is the ability to either avoid ESPN entirely, or buy ESPN the channel on a standalone basis. But it's important to point out that's not what ESPN is actually offering here. The new streaming service won't provide access to ESPN's existing channel lineup unless you have a traditional cable subscription. Without a traditional cable TV subscription, users of the app will be directed to other content they may or may not actually want: "The over-the-top service will roll out sometime in the spring, in tandem with a redesign of Disney's ESPN app. The over-the-top feature will be one part of that app, allowing users to watch live programming that will not otherwise be available on any of its channels. "The third feature is a plus service, we're calling it ESPN Plus, that will include an array of live programming that is not available — live sports, live sports events — not available on current channels," Iger said in an exclusive interview on CNBC's "Closing Bell." This is something ESPN already tried once with the launch of ESPN 360 (ultimately renamed just ESPN 3) years ago. That channel offered access to streaming sports content, but not any of the content anybody was actually interested in (unless you're really crazy for men's professional hopscotch). What users want is either the option to buy ESPN as a standalone channel, or to avoid ESPN entirely. What ESPN's offering is a streaming channel retread filled with content viewers probably didn't ask for. All, again, because ESPN is afraid of cannibalizing its traditional viewership numbers by trying something new.
Admittedly, ESPN is stuck between a rock and a hard place. ESPN currently makes $7.21 per month for each cable TV subscriber, many of whom pay for ESPN begrudgingly. Many industry insiders have also told me over the years that ESPN's contracts with many cable providers state that should ESPN offer its own streaming service, those providers will no longer be bound by restrictions forcing them to include ESPN in their core lineups, which would only increase the number of skinny bundle options offered without ESPN. In short, if ESPN offers a standalone version of ESPN, it only encourages customers to cut the cord and move to less expensive (and less profitable) alternatives. But if ESPN doesn't give customers what they want, they'll cut the cord out of frustration anyway.

If ESPN actually wants to be ready for the future, getting out ahead of the inevitable shift to streaming is the only real solution. Nobody said evolution would be painless, or that the traditional cable TV cash cow would live forever. ESPN can get out ahead of the trend now, or play from behind later, when the cord cutting trend shifts from a trickle to a torrent.

posted 16 days ago on techdirt
Last spring, Mike Masnick covered a completely fake court order that was sent to Google to make some unflattering information disappear. The court order targeted some posts by a critic of a local politician. Ken Haas, a member of the New Britain (CT) city commission, got into an online argument with a few people. When things didn't go his way, Haas played a dubious trump card: Several months ago, he got into a public controversy with local activist Robert Berriault — allegedly, when someone got in a Facebook political spat with Haas, he responded by writing, "You do know I have access to ALL city records, including criminal and civil, right???" Berriault took that to be a threat that Haas would misuse that access for political purposes and wrote about this on the New Britain Independent site, as well as in a not-much-noticed change.org petition calling for Haas's removal.

Following this, a delisting request was sent to Google with a supposed Connecticut federal court order attached. But the judge who supposedly signed it (John W. Darrah) doesn't exist in Connecticut, the word "state" was misspelled (as "Sate"), and the docket number had already been used for another, existing civil case. Ironically, as Mike discovered, the docket number linked to an Illinois case (and there is a judge named "John W. Darrah" in Illinois) with some similar subject matter. It was a Prenda case and it involved, of all things, allegations of document forgery. That was a crazy case for a whole bunch of reasons, but it also got a ton of public attention. If you're going to fake a court document, maybe don't borrow from a widely known case that got a lot of attention and is partly about forging legal documents. It's like picking a disguise to stay inconspicuous while committing a crime, and dressing up as Hitler. People are going to notice, and they're going to remember.
Eugene Volokh, who first discovered the bogus takedown notice, obtained a copy of the police report linked to Haas's ill-advised social media foray. Apparently, Haas thought the police would hand him the online victory he had so miserably failed to obtain earlier. He reported Robert Berriault for harassment, only to be told nothing of the sort had taken place. Haas even admitted he had made the only threat -- the one where he implied he'd start dumping private records if his opponents didn't shut up. The police told Haas something that made him very sad: I advised Haas that this was not a criminal act and that Berriault had every right to voice his opinion. I advised Haas that when you choose a career in politics that harsh criticism comes with the territory. Haas stated that he understood.

The sender of the takedown notice with the bogus court order is unknown, but the most direct beneficiary of the removal of these links would be none other than Ken Haas. It could be that some sketchy rep management firm did the dirty work, but Haas was likely involved somehow. Haas has apparently not let this go. Dean Jones, invaluable scourer of the Lumen database, points out that another bogus attempt to delist online content has been made -- this one targeting posts at both Techdirt and the Volokh Conspiracy. Now it emerges that an anonymous complainant has sent Google a defamation complaint requesting the removal of the two articles from its search results, citing a 1979 Supreme Court case concerning the public disclosure of personal information. Yes, this one is styled as a defamation takedown request, even though both articles are factual and contain receipts. The takedown notice cites a Supreme Court decision that has nothing to do with either post, despite the claims made in the notice. In 1979, the U.S. Supreme Court recognized an individual interest in the "practical obscurity" of certain personal information. The case was DOJ v. Reporters Committee for a Free Press.
As well, this information is harmful to me as it concerns unfounded information which never resulted in prosecution. Not only has the dissemination of this information never been legitimate, but its internet referencing is clearly harmful to my reputation as my professional and personal surroundings can access it by typing my first and last names on the Internet. This case has to do with withheld documents and FOIA exemptions. It does not guarantee some right to "practical obscurity" for all Americans. In this case, the DOJ withheld rap sheets from release, arguing their release would be an "unwarranted invasion of privacy." The Supreme Court agreed, stating that the purpose of FOIA law was to permit examination of the government's inner workings, not subject private citizens' lives to greater scrutiny. A police report, obtained and posted by a private citizen (or even a news agency), is not a violation of this ruling. And it sure as hell isn't defamation. Haas is welcome to litigate the issue, but he'd have to sue the police department for releasing it. If Eugene Volokh acquired it from the other party in the complaint (who has a right to obtain a copy of the police report), then Haas has no one he can bring legal action against. The other party involved in a police report can do whatever they want with their copy, including sharing it with blogs detailing a politician's incredibly stupid actions. As Jones notes at Shooting the Messenger, Google was no more impressed with this latest attempt to vanish critical posts. The links remain live in Google's search engine results and Haas' reputation remains as mismanaged as ever.

posted 16 days ago on techdirt
Techdirt has been exploring the important questions raised by so-called "fake news" for some time. A new player in the field of news aggregation brings with it some novel issues. It's called TopBuzz, and it comes from the Chinese company Toutiao, whose rapid rise is placing it alongside the country's more familiar "BAT" Internet giants -- Baidu, Alibaba and Tencent. It's currently expanding its portfolio in the West: recently it bought the popular social video app Musical.ly for about $800 million: Toutiao aggregates news and videos from hundreds of media outlets and has become one of the world's largest news services in the span of five years. Its parent company [Bytedance] was valued at more than $20 billion, according to a person familiar with the matter, on par with Elon Musk's SpaceX. Started by Zhang Yiming, it's on track to pull in about $2.5 billion in revenue this year, largely from advertising.

An in-depth analysis of the company on Y Combinator's site explains what makes this aggregator so successful, and why it's unlike other social networks offering customized newsfeeds based on what your friends are reading: Toutiao, one of the flagship products of Bytedance, may be the largest app you’ve never heard of -- it's like every news feed you read, YouTube, and TechMeme in one. Over 120M people in China use it each day. Yet what's most interesting about Toutiao isn't that people consume such varied content all in one place... it's how Toutiao serves it up. Without any explicit user inputs, social graph, or product purchase history to rely on, Toutiao offers a personalized, high quality-content feed for each user that is powered by machine and deep learning algorithms.

However, as people are coming to appreciate, over-dependence on algorithmic personalization can lead to a rapid proliferation of "fake news" stories. A post about TopBuzz on the Technode site suggests this could be a problem for the Chinese service: What's been my experience?
Well, simply put, it's been a consistent and reliable multi-course meal of just about every variety of fake news. The post goes on to list some of the choice stories that TopBuzz's AI thought were worth serving up: Roy Moore Sweeps Alabama Election to Win Senate Seat Yoko Ono: "I Had An Affair With Hillary Clinton in the '70s" John McCain's Legacy is DEMOLISHED Overnight As Alarming Scandals Leak Julia Roberts Claims 'Michelle Obama Isn't Fit To Clean Melania's Toilet'

The post notes that Bytedance is aware of the problem of blatantly false stories in its feeds, and the company claims to be using both its artificial intelligence tools and user reports to weed them out. It says that "when the system identifies any fake content that has been posted on its platform, it will notify all who have read it that they had read something fake." But: this is far from my experience with TopBuzz. Although I receive news that is verifiably fake on a near-daily basis, often in the form of push notifications, I have never once received a notification from the app informing me that Roy Moore is in fact not the new junior senator from Alabama, or that Hillary Clinton was actually not Yoko Ono's sidepiece when she was married to John Lennon.

The use of highly automated systems, running on server farms in China, represents new challenges beyond those encountered so far with Facebook and similar social media, where context and curation are being used to an increasing degree to mitigate the potential harm of algorithmic newsfeeds. The fact that a service like TopBuzz is provided by systems outside the control of the US or other Western jurisdictions poses additional problems. As deep-pocketed Chinese Internet companies seek to expand outside their home markets, bringing with them their own approaches and legal frameworks, we can expect these kinds of issues to become increasingly thorny.
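For what it's worth, the retraction mechanism Bytedance describes -- notify everyone who read an item once it's flagged as fake -- is conceptually simple, which makes its apparent absence in practice all the more notable. Here is a minimal sketch of what such a system would have to do at its core; all names here are hypothetical illustrations, not Bytedance's actual implementation or API:

```python
from collections import defaultdict


class ReadTracker:
    """Hypothetical sketch: record which users read which item, so that
    when an item is later flagged as fake, a correction notice can be
    generated for every user who saw it."""

    def __init__(self):
        # item_id -> set of user_ids who have read that item
        self.readers = defaultdict(set)

    def record_read(self, user_id, item_id):
        self.readers[item_id].add(user_id)

    def flag_fake(self, item_id):
        # Return the correction notices that would be pushed to readers.
        return [(user, "Item %s you read was identified as fake" % item_id)
                for user in sorted(self.readers[item_id])]


tracker = ReadTracker()
tracker.record_read("alice", "story-42")
tracker.record_read("bob", "story-42")
notices = tracker.flag_fake("story-42")
```

The hard parts in production are scale (tracking billions of read events) and classification (deciding what counts as fake), not the notification fan-out itself -- which is why the author's report of never receiving a single correction suggests the classifier, not the plumbing, is what's missing.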
We are also likely to see those same services begin to wrestle with some of the same problems currently being tackled in the West. Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+
