posted 16 days ago on techdirt
As you probably remember, after a somewhat wacky trial, a jury decided in Google's favor, finding that its use of some of the Java APIs was fair use. Oracle, of course, isn't going down quietly. It immediately asked the judge, William Alsup, to reject the jury's verdict, which he refused to do. Everyone expects that Oracle will appeal this as high as it can go, though its chances aren't great. In the meantime, though, Oracle isn't done trying every possible door at the district court level. Last week it asked for a new trial in what I can only describe as Oracle's sour grapes motion. It starts out by claiming that "the verdict was against the weight of the evidence" and thus a new trial is necessary. And then it whines about a whole bunch of other issues, including Google's plans to use Android on computers, arguing that the "harm" portion of the trial was unfairly limited to just tablets and phones. It also whines about certain evidence it was not allowed to present. These are purely "waaaaah, we lost, fix it, waaaaaah" kinds of arguments. The court excluded lots of Google evidence as well, and Oracle may not really want to revisit some of that either. You can read the full document below or at the link above, but analyzing all of it is pretty silly. It's strictly a sour grapes argument that is unlikely to go anywhere. At the same time, Oracle filed yet another motion for judgment as a matter of law... that also seems unlikely to go anywhere. Here, though, the argument is basically that the jury got fair use wrong, and it's pretty laughable: it goes through each of the four factors and argues why the jury erred on each. Now, it's true, as some have argued, that a court can take the four fair use factors and basically come to any conclusion it wants, but it's hard to see Judge Alsup doing that here. It would be shocking to see him do so, actually.
And, rather than go through each argument, I'll just present the table of contents of Oracle's filing here so you can see how desperate the company is: Basically, Oracle is continuing to falsely pretend that fair use only applies to non-commercial use (it doesn't), and that creating something new with an API isn't transformative unless it's like artwork or something (this is wrong). Oracle's interpretation of fair use is not supported by the history or case law of fair use, and it would be shocking to see the court accept it here. Meanwhile, on the flip side, Google is looking to punish Oracle's lawyers and asking for sanctions against them for revealing in open court sensitive information that had been sealed by the court. On January 14, 2016, Oracle’s counsel Annette Hurst disclosed in open court representations of sensitive confidential financial information of both Google and third-party Apple Inc., as well as extremely confidential internal Google financial information.... After Ms. Hurst’s improper disclosures, Oracle and its counsel neither sought to remedy the effects of the disclosures nor acknowledged their wrongdoing. They instead refused to take responsibility for the disclosures, claimed they were inconsequential because Oracle hoped to use the information at trial (which it never did), and even argued that Google’s motion to seal the third party Apple information—which Judge Ryu subsequently granted,... —was “merely a delaying tactic.” ... Within days of the disclosures, and following Oracle’s failure to take remedial action, this information became headline news for major news outlets, at least one of which noted that, thanks to Ms. Hurst, the press could finally report on confidential information that had theretofore been only a subject of speculation. Oracle’s disclosures and its subsequent actions reveal a profound disregard for this Court’s Protective Order and for other parties’ confidential information. 
Google and third party Apple were harmed by Oracle’s counsel’s disclosure regarding the terms of a significant and confidential commercial agreement. Google believes it is important, both for this case and for other cases in this District, for the Court to make clear that Oracle’s counsel’s actions were improper, that Oracle’s excuses for the disclosures are invalid, and that Oracle’s failure, after the fact, to cooperate in remedying the disclosures was inconsistent with the Protective Order. Disclosing confidential/sealed information in court is a pretty big deal, though I have no idea how the court will rule on this matter. Either way, it's safe to say that there's little love lost between Google and Oracle (and their lawyers).

posted 16 days ago on techdirt
For a while now, some in the copyright community have been pushing for a copyright "small claims court" as an alternative to filing a federal lawsuit over copyright law. It's true that, especially for small copyright holders, the cost of filing a lawsuit may appear to be rather prohibitive. But it's not clear that a small claims court is the answer. A few years ago, we wrote about some potential concerns with such an approach, but have also admitted that if set up right, it could have some advantages. But that requires it be set up right. Unfortunately, a new bill has been introduced by Rep. Hakeem Jeffries, along with Rep. Tom Marino, to officially set up such a system -- and it's done in a way that does not look well-designed, and instead seems likely to lead to a massive rush of small claims, especially by copyright trolls. The bill is called the Copyright Alternative in Small-Claims Enforcement Act of 2016, or CASE Act, and... it's got problems. The "good" news, if you can call it that, is that claims that would go before this appointed tribunal, made up of copyright lawyers recommended by the Register of Copyrights and appointed by the Librarian of Congress, would be subject to much lower statutory damages caps than in the federal courts. A copyright claim in a federal court has statutory damages up to $150k, for willful infringement. In the small claims system, the maximum statutory damages would be $15k. But, really, that's just half of today's official statutory damages cap -- because if there's no willful infringement, the Copyright Act caps damages at $30k, and in the small claims world there's no option to claim willful infringement. Another potentially good feature is that this small claims setup would be able to hear two kinds of claims: the standard ones involving claims of someone violating one of the established rights under copyright law... but also cases about abusive DMCA notifications, under Section 512(f) of the DMCA.
Of course, as we've noted in the past, the federal courts have effectively written 512(f) out of the law and refuse to punish those who file bogus DMCA notices. It's not at all clear how things would change here. The bill explicitly notes that the remedies for a 512(f) bogus DMCA notice claim would be limited "to those available under this chapter." But it's unclear if that really means that you could get $15k for a bogus DMCA filing. And that's because the section on statutory damages is clearly written only with people suing for copyright infringement in mind, and not people suing over bogus DMCA takedowns. For example, the $15k maximum statutory damages only applies to "works timely registered." But... how does that make sense for 512(f) claims? In those cases, the question of whether or not the defendant timely registered a copyright makes no sense at all. If someone sends a bogus DMCA takedown over a copyright that doesn't exist or that they don't hold, why should its registration status matter? It's almost as if Rep. Jeffries (or the lobbyists who wrote this bill) only tossed in the part about 512(f) claims to appease people concerned about abusive DMCA takedowns, and then completely forgot about it afterwards. But the really big problem in my mind is that this system seems likely to be swamped by copyright trolls. We already see that they're flooding the federal court system, where multiple rulings against joinder (i.e., the ridiculous bundling of thousands of possible file sharers together) have meant that when trolls do sue, they're generally limited in how many people they can sue. Making the process cheaper, while still offering statutory damages amounts that can be quite scary to the average American, will still get the job done of scaring threatened users into paying fines that are much smaller than the $15,000.
And, yes, this small claims system will allow for discovery, which is the key feature that trolls want. They want to sue, and then get discovery where they can send demands to ISPs for names of subscribers based on IP addresses, and there doesn't appear to be anything in the bill to stop that. It does note that parties seeking discovery need to show "good cause," but that may be a fairly low bar. It also notes that responding to discovery requests to non-parties in the dispute will be "voluntary," so perhaps ISPs will resist, but that's not certain. And thus, this three-member board may find itself on the receiving end of a ton of ridiculous claims from trolls who have no intention of following through with the case. One would hope, with the federal court system's copyright docket currently overrun with trolling cases, that whoever drafted this law would have thought through a better plan to stop that from happening here. Another potential issue: the bill would let individuals go after not just actual infringers, but also service providers if they fail to follow through on a DMCA takedown notice. Basically, it exports the DMCA safe harbors to this small claims process as well, but that may mean that internet platforms are going to get dragged through a process that was meant to focus on small claims that could be easily adjudicated. There's also this oddity. After laying out the specific responsibilities of the three individuals who will handle all of these small claims cases, the bill notes: When not engaged in performing their duties as prescribed in this chapter, to perform such other duties as may be assigned by the Register of Copyrights. What, exactly, is that going to entail? Who knows how this will actually play out. A few years back, the UK introduced its own small claims copyright system. But I have no idea how it's doing. I haven't seen any numbers or any indication of how widely it's used.
Perhaps it works great and is a useful tool for dealing with small scale infringement issues. But I do worry about the way the bill is currently written and how it can be abused, especially by trolls who just want to pressure people into settling, and where the threat of a $15k award might be plenty.

posted 16 days ago on techdirt
For the last two and a half years or so, my Congressional Representative, Jackie Speier, has insisted that she was just about to introduce a federal law outlawing revenge porn. And then it wouldn't come. There would be an article saying it was almost ready... and then nothing. Months would go by, another article would appear... and then nothing. Finally, on Thursday, Speier introduced the bill, insisting that the delay was in convincing Silicon Valley companies to sign on to it. Of course, that leaves out the fact that the reason many refused to sign on was because previous iterations of the bill were incredibly problematic and almost certainly unconstitutional. With two and a half years to work on it, however, the bill that was finally introduced, called the Intimate Privacy Protection Act of 2016, or IPPA, is not nearly as bad as it could have been, nor as bad as some of the suggestions passed around by those who "consulted" on drafting the bill. But that doesn't mean the bill isn't unconstitutional. Let's be clear: revenge porn is horrific. The creeps who put up revenge porn sites deserve to be shamed and mocked. The people who actually upload images to such sites or visit them are complete losers who need to get a life. But there are really important legal issues that come up when you try to outlaw such things, starting with the First Amendment. Yes, yes, as everyone will say, there are some exceptions to the First Amendment (though if you claim that shouting fire in a crowded theater is one of them, you're going to be mocked as well). But the exceptions to the First Amendment are very narrowly circumscribed by the Supreme Court, and they're much more narrow than most armchair lawyers believe. Looking over the list, it's pretty difficult to see how revenge porn fits. Next up, context matters a lot, and while the bill tries to take some of that into account, it's unclear if it actually succeeds.
The bill has a vague and nearly totally undefined "public interest" exception -- but what does that actually include? That's left unclear. Remember last year, when Lenny Kravitz accidentally exposed himself at a concert? Was everyone who passed around videos and images of that violating this new revenge porn bill? It would seem so. That would be "knowingly" using an "interactive computer service... to distribute a visual depiction of a person who is identifiable from the image itself or information displayed in connection with the image... of the naked genitals... of a person, with reckless disregard for the person's lack of consent to the distribution." Remember, tons of people were passing around that image and video last year. Should all of them face five years in prison plus fines? That seems... extreme. And extremely problematic. The ACLU has a rather simple request to fix this problem with the law: add an intent requirement, such that it only applies to those who "maliciously and intentionally invade another person's privacy." Even that may have some First Amendment issues, but supporters of the law refused to add an intent standard, claiming that such a standard would be too limiting, and wouldn't cover those who weren't motivated by "malice" but by money or fame. But that's ridiculous. Any court would likely decide that setting up a revenge porn site for money was a form of malice. Thankfully, this version of the law says that it does not apply to online platforms, as defined by Section 230 of the Communications Decency Act, which is a big jump from where some of the crafters of this bill were a few years ago, when they openly discussed undermining CDA 230 as a way to attack revenge porn. In the end, two and a half years of effort means that the bill isn't as horrible as some of the earliest suggestions, but it's still not clear that it's constitutional.
It seems likely that the ACLU, and possibly others, will challenge this law should it pass, and then I guess we'll find out what the courts actually think of it.

posted 16 days ago on techdirt
Learn how to edit your pictures like a pro with the $29 Ultimate Adobe Photo Editing Bundle. The 8 courses cover what you need to know about Adobe Photoshop and Lightroom, from becoming familiar with the interfaces and learning shortcuts to how to properly use layers and color correction to enhance your photos. Throughout the courses, you'll be learning from graphic designers and entrepreneurs about how to create images for various industries or your own social media, and you will have unlimited access to the instructional materials so you can reference them whenever you need them. You'll learn how to take your photos from merely Instagram good to frame-worthy. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

posted 16 days ago on techdirt
Former Speaker of the House Newt Gingrich is making some news today for some silly remarks he made on Fox News last night in response to the attack in Nice, France. It comes right at the beginning of this video: All of the press -- for good reason -- is focusing on the first part of what he said, about deporting anyone "of Muslim background" (whatever that means) who "believes in Sharia." We'll skip over why this is totally clueless and unconstitutional, because plenty of other news sites are handling that. Instead, we'll move on to what he said right after that first statement, which is something that fits much more with Techdirt's usual themes. Gingrich claims two more ridiculous things, each only slightly less ridiculous than his first statement: Anybody who goes on a website favoring ISIS, or Al Qaeda, or other terrorist groups, that should be a felony and they should go to jail. Any organization which hosts such a website should be engaged in a felony. It should be closed down immediately. Our forces should be used to systematically destroy every internet based source... He then goes on to note that if we can't take them off the internet, we should just kill them all. Which, you know, I'm sure won't anger any more people against us. Either way, this is idiotic. Merely visiting a website should put you in jail? What if you're a journalist? Or a politician? Or a researcher trying to understand ISIS? That should be a felony? That's not how it works. This also assumes, idiotically, that merely reading a website about ISIS will make people side with ISIS. It's also not, at all, how the law works. Same with the second part about it being a felony to host such content. We're already seeing lawsuits against social media sites like Facebook, Twitter and YouTube for hosting accounts from ISIS, and many are voluntarily taking down lots of those accounts. But making it a felony to keep them up? That's also not how the law works.
Reacting to a very real problem with stupid unconstitutional solutions suggests someone who has no clue what he's doing.

posted 16 days ago on techdirt
The other day I saw the following tweet and was very confused: That's a tweet from the UK's Intellectual Property Office (IPO) asking how the UK's National Portrait Gallery in London "manage[s] the copyright of national treasures like Shakespeare." My initial response, of course, was "Wait, Shakespeare is in the bloody public domain, you don't have any copyright to manage!" It seems rather easy to manage "the copyright" of Shakespeare when there is none. But it turns out the link is... even worse. It's to a blog post on the IPO website eagerly praising the National Portrait Gallery for engaging in out-and-out copyright fraud. You'd think that the Intellectual Property Office would recognize this, but it does not. The tweet was doubly misleading, too, because it's not about the works of William Shakespeare, but rather a portrait of William Shakespeare. The IPO then explains that the National Portrait Gallery is doing a brisk business licensing these public domain images, noting that: According to the gallery’s most recent statistics – the top five individual portraits licensed from its website are, in descending order: William Shakespeare, Richard III, Queen Elizabeth I, King John and King Henry V. Obviously, all of those portraits were created centuries ago -- and are in the public domain. So why is the National Portrait Gallery licensing them at all? Well, I'm pretty sure this goes back to an issue we've written about quite some time ago. While in the US the caselaw is clear that merely digitizing public domain images does not create a new copyright, the National Portrait Gallery in London has always taken the opposite view. Back in 2009, we wrote about this very same museum threatening Wikimedia Commons for posting high resolution scans of public domain works that were downloaded from the NPG's website.
But, here's the thing: just a few months ago, we wrote that the UK Intellectual Property Office (the same organization as above) had declared that scans of public domain works are also in the public domain in Europe (including the UK... for now at least). Here's what the UK's IPO said just months ago about copyright on scans of public domain images: However, according to the Court of Justice of the European Union which has effect in UK law, copyright can only subsist in subject matter that is original in the sense that it is the author’s own 'intellectual creation'. Given this criteria, it seems unlikely that what is merely a retouched, digitised image of an older work can be considered as 'original'. This is because there will generally be minimal scope for a creator to exercise free and creative choices if their aim is simply to make a faithful reproduction of an existing work. And then, just months later, it's praising the National Portrait Gallery for falsely claiming copyright on such images and then fraudulently profiting by licensing those images based on copyrights it doesn't hold? And the IPO's whole focus seems to be on just how much money can be made here. Read this and try not to feel sick: Online availability and easy access to images and other data are crucial aspects of modern museum and library curation. Huge databases of valuable information are available. Users need to know where to find these resources and how to use them without infringing copyright. Museums and libraries are developing strategies to improve access for researchers, to give access to businesses users who want to develop their own intellectual property (IP) by using cultural resources and develop their own brands and merchandising.
Mathew Bailey, Rights and Images Manager at the National Portrait Gallery, balances the high wire between providing public access to our shared national assets and the need to encourage, develop and supply the creative economy with legally certain, quantifiable, marketable IP. The commodity he deals in – our heroes – couldn’t be more volatile. Then, to make matters even stupider, the UK's Intellectual Property Office notes that no one has any idea who created any of these top portraits: It’s no accident the names of the artists who painted the UK’s top five portraits are uncertain - King John looks like he’s just sat on a thistle, whereas Richard III only half fills his canvas. The lives of Richard III, King John and Henry V were all dramatised by Shakespeare during the reign of Elizabeth I. She was an image conscious monarch in the first age of mass communication and Shakespeare was her blockbuster dramatist. Shakespeare’s narratives add value and are the real reason why he, Richard, Elizabeth, John and Henry are still top of the portrait pops. Did it not occur to Dan Anthony, who wrote this article, to recognize the absurdity of the National Portrait Gallery claiming a copyright in works where it doesn't even know the names of the artists who created them? Holy crap. How does the UK IPO find these people? Oh, and then the article ends with this: All images © National Portrait Gallery, London. Bloody hell. They are not. They're in the public domain. Here's Shakespeare's portrait: You can find it, accurately listed as being in the public domain, over at Wikipedia. Dan Anthony at the UK IPO is incredibly misinformed, and he should ask his own colleagues, who just months ago made it clear that such images were in the public domain, before posting such ridiculousness on the IPO's website.

posted 16 days ago on techdirt
In a move that's sure to only increase the nation's respect for law enforcement, police departments have been arresting people for "threatening" social media posts. This activity follows the tragedy in Dallas, where five police officers were killed by a man armed with a rifle. Naomi LaChance of The Intercept has more details. Four men in Detroit were arrested over the past week for posts on social media that the police chief called threatening. One tweet that led to an arrest said that Micah Johnson, the man who shot police officers in Dallas last week, was a hero. None of the men have been named, nor have they been charged. Four more arrests have occurred elsewhere: Last weekend in Connecticut, police arrested Kurt Vanzuuk after a tip for posts on Facebook that identified Johnson as a hero and called for police to be killed. He was charged with inciting injury to persons or property. An Illinois woman, Jenesis Reynolds, was arrested for writing in a Facebook post that she would shoot an officer who would pull her over. “I have no problem shooting a cop for simple traffic stop cuz they’d have no problem doing it to me,” she wrote, according to the police investigation. She was charged with disorderly conduct. In New Jersey, Rolando Medina was arrested and charged with cyber harassment. He allegedly posted on an unidentified form of social media that he would destroy local police headquarters. In Louisiana, Kemonte Gilmore was arrested for an online video where he allegedly threatened a police officer. He was charged with public intimidation. Arresting people for speech is problematic, especially when the content of the communications doesn't rise to the level of a "true threat." The Supreme Court's Elonis decision says this distinction is important. It's not enough for a person or persons to subjectively view the communication as threatening. It needs to be viewed through the "reasonable person" lens. In these cases, perception appears to be everything. 
In the wake of the Dallas shooting, it's entirely normal for police officers to view the world a little differently. But this altered view -- one that's likely to be less skewed as time goes on -- can't be allowed to override the First Amendment and deprive individuals of their freedom to speak, not to mention their actual freedom. And just as certainly as law enforcement officers and officials are likely to view certain acts of blowhardiness as threatening in the immediate aftermath of a shooting targeting police officers, certain citizens are likely to vent their frustration and anger in particularly stupid ways, but without the intention or ability to carry out the perceived threat. Caution should be exercised on both sides of the interaction. However, those with the power to arrest, detain, and charge citizens for stupidity should be the more cautious of the two parties -- simply because they still hold the power, despite recent events. Those in power should also take care to carry this out with some sort of consistency, if that's the route they're choosing to take. It can't just be deployed against a bunch of nobodies who mouthed off about their contempt for law enforcement. If this is how it's going to be handled, those who speak with the same rhetoric in defense of law enforcement need to be held accountable. Former congressional rep Joe Walsh tweeted out that this was now "war on Obama" after the Dallas shootings and yet no one showed up at his door to arrest him for threatening the President. It's bad enough that power is being misused to silence criticism of law enforcement violence. It's even worse when this power is deployed in a hypocritical fashion.

posted 16 days ago on techdirt
For many, many, many, many years, we've followed the rather crazy trials and tribulations of trying to get an international treaty signed to make it easier for the blind to access copyright-covered works (basically requiring countries to allow accessible versions for the visually impaired to be reproduced and distributed). This is a treaty that people have tried to get in place for years and years and years, and it was blocked again and again -- often by legacy copyright industries who flat out refuse to support any kind of agreement that could be seen as strengthening user rights, which they see (ridiculously, and incorrectly) as chipping away at copyright. Amazingly, despite a last minute push by the MPAA and the Association of American Publishers, an agreement was reached and signed in 2013, called the Marrakesh Treaty. As we noted at the time, we fully expected the legacy copyright industries to refocus their efforts on blocking ratification in the US, and that's exactly what's happened. Hell, it took almost three years for the White House to finally send the treaty over to the Senate for ratification. That happened back in February, and it was sent together with another copyright-related treaty, the very troubling Beijing Treaty, which creates an entirely new form of copyright for performers. So far, the Senate has moved on neither. However, for the Marrakesh Treaty to go into effect, it needed 20 countries to ratify it. And while the US has sat still, a few weeks ago Canada became the 20th country to complete the ratification process. That means the agreement officially goes into effect on September 30th of this year. As the EFF noted: That’s another significant step for a treaty that has already made some important breakthroughs as the first international treaty focused exclusively on the rights of users of copyrighted material.
Typically, if users’ rights are considered at all, they’re relegated to a section on “limitations and exceptions” or even to non-binding introductory text. In the Marrakesh Agreement, they are front and center. That post also noted that it should be a no-brainer for the US to ratify this: United States law is already compliant with Marrakesh, but the government has not yet ratified the agreement. To do so requires a two-thirds vote from the Senate, and then a formal ratification from the President. Even at a time when passing legislation has proven exceedingly difficult, the Marrakesh Agreement would be a relatively easy and uncontroversial way to demonstrate leadership internationally and help bring books to millions of blind, visually impaired, and print-disabled people around the world. But why hasn't it happened? According to KEI, a group that fought hard for many years to get the agreement in place, the legacy copyright industries are working hard to block it in Congress: The Obama Administration has asked the US Congress to ratify the treaty... but Congress has yet to act, in large part due to lobbying from the Association of American Publishers.... The AAP lobbied the Administration for changes in the U.S. ratification package, and now have asked the Congress for changes that they failed to obtain in the interagency review process. The U.S. ratification already represents compromises, including limitations of exports to countries that have ratified the treaty, a provision that currently excludes all of Africa and Europe. But the AAP continues to press for additional amendments to the ratification legislation. This isn't a huge surprise: the AAP more or less admitted that it would refuse to support anything that established greater user rights, since that would be seen as an attack on "their rights." And, of course, the MPAA has also been working hard to block it, whining that this treaty could (gasp!) "affect other future treaties."
All of that is just shameful. This is a no-brainer situation. Helping the visually impaired get access to these works is something everyone should agree is a good thing. And yet, because they're so scared of user rights expanding in any way at all, the legacy industries have to block it.

posted 16 days ago on techdirt
At the heart of copyright and patents there is -- theoretically -- an implicit social contract. People are granted a time-limited, government-backed monopoly in return for allowing copyright material or patented techniques to enter the public domain once that period has expired. And yet copyright and patent holders often seem unwilling to respect the terms of that contract, as they seek to hang on to their monopolies beyond the agreed time in various ways. In the case of copyright, this has been through repeated extensions of copyright's term, even though there is no economic justification for doing so. In the realm of pharma patents, a number of techniques have been employed. One is "pay for delay." Another is the granting of "data exclusivity." And a third is the use of "evergreening." Techdirt wrote about the last of these a while back, so it's no surprise that companies have continued to "innovate" in this field since then. For example, AstraZeneca is trying to use a variant of evergreening for its anti-cholesterol pill Crestor. As a New York Times article explains: Crestor is the company’s best-selling drug, accounting for $5 billion of its $23.6 billion in product sales last year. About $2.8 billion in sales were in the United States, where the retail price is about $260 a month, according to GoodRx.com. Here's how AstraZeneca hopes to hold on to that lucrative market, even though its patent on the drug is now coming to an end, and it should be entering the public domain: The company is making a bold attempt to fend off impending generic competition to its best-selling drug, the anti-cholesterol pill Crestor, by getting it approved to treat [a] rare disease. In an unusual legal argument, the company says Crestor is entitled to seven years of additional market exclusivity under the Orphan Drug Act, a three-decade-old law that encourages pharmaceutical companies to develop treatments for rare diseases. 
In May, AstraZeneca won approval of Crestor to treat children with the rare genetic disease of homozygous familial hypercholesterolemia (HoFH). That gives it an additional seven years of market exclusivity for the drug, but only for that particular -- very small -- market. However, the designation means that detailed prescription information about using Crestor to treat children in this way must not be included on generic labels. AstraZeneca's clever lawyers are trying to turn that into extended exclusivity for all uses of the drug: AstraZeneca immediately petitioned the F.D.A., arguing that if the correct dose for children with HoFH could not be on the generic label, then it would be illegal and dangerous to approve any generic versions for any use at all. That is because doctors might still prescribe the generic for children with HoFH and choose the wrong dose, posing "substantial safety and efficacy risks." Needless to say, AstraZeneca was only asking for generic versions to be kept off the market for another seven years for safety reasons, not because doing so would bring it billions more in exclusive sales to the general population. Of course. The New York Times article goes into more detail about the fascinating legal background to AstraZeneca's argument here, and notes that other drug companies have tried the same approach in the past, without success. Even if this particular ploy does fail again, we can be sure that pharma companies will be back with other sneaky ways of extending their patent monopolies -- implicit social contract be damned. Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

posted 16 days ago on techdirt
In any Presidential campaign, there are always going to be partisan folks who side with one candidate or another. And they may campaign for the candidate they like. But, obviously, the Donald Trump phenomenon is a bit different this year. Even so, it's still pretty surprising to see a ton of big names in the tech space send an open letter to Trump insisting that he would be an absolute disaster for innovation and the tech industry. They're not arguing on the usual partisan issues here, but rather the fact that Trump's general zero-sum outlook on the world doesn't recognize how innovation works: Trump would be a disaster for innovation. His vision stands against the open exchange of ideas, free movement of people, and productive engagement with the outside world that is critical to our economy — and that provide the foundation for innovation and growth. Let’s start with the human talent that drives innovation forward. We believe that America’s diversity is our strength. Great ideas come from all parts of society, and we should champion that broad-based creative potential. We also believe that progressive immigration policies help us attract and retain some of the brightest minds on earth — scientists, entrepreneurs, and creators. In fact, 40% of Fortune 500 companies were founded by immigrants or their children. Donald Trump, meanwhile, traffics in ethnic and racial stereotypes, repeatedly insults women, and is openly hostile to immigration. He has promised a wall, mass deportations, and profiling. We also believe in the free and open exchange of ideas, including over the Internet, as a seed from which innovation springs. Donald Trump proposes “shutting down” parts of the Internet as a security strategy — demonstrating both poor judgment and ignorance about how technology works. His penchant to censor extends to revoking press credentials and threatening to punish media platforms that criticize him. This is a unique presidential campaign. 
And, as we've noted, Hillary Clinton's tech platform is not great either. But, at the very least, her platform's problem is that it's just a bunch of vague pronouncements designed for people to read into them what they will. The list of signatories on this letter runs to around 145 names, including some key figures in the tech and policy world: Evan Williams (founder of Blogger, Twitter and Medium), Vint Cerf (basically invented the internet), Jimmy Wales (Wikipedia), Steve Wozniak (you know who he is) and more. There are also a ton of well-known venture capitalists on the list and lots and lots of other entrepreneurial names that are well known inside Silicon Valley. This is a pretty huge list of people putting their name to a statement a lot stronger than one you'd normally see during a campaign season. Silicon Valley sort of has a reputation for more or less trying to ignore government. And while that's less true today than in the past, the one thing that does make Silicon Valley rise up is politicians looking to do something really stupid that's likely to harm innovation. And it appears that they see Donald Trump as just that kind of threat.

posted 17 days ago on techdirt
Europe only has a few days left to ensure that its member countries are actually protected by real net neutrality rules. As we've been discussing, back in October the European Union passed net neutrality rules, but they were so packed with loopholes as to be not just useless, but actively harmful, in that they effectively legalize net neutrality violations by large telecom operators. The rules carve out tractor-trailer-sized loopholes for "specialized services" and "class-based discrimination," as well as giving the green light for zero rating, letting European ISPs trample net neutrality -- just so long as they're clever enough about it. In short, the EU's net neutrality rules are in many ways worse than no rules at all. But there's still a chance to make things right. While the rules technically took effect April 30 (after much self-congratulatory back patting), the European Union's Body of European Regulators for Electronic Communications (BEREC) has been cooking up new guidelines to help European countries interpret and adopt the new rules, potentially providing them with significantly more teeth than they have now. With four days left for the public to comment (as of the writing of this post), Europe's net neutrality advocates have banded together to urge EU citizens to contact their representatives and demand they close these ISP-lobbyist crafted loopholes. Hoping to galvanize public support, Sir Tim Berners-Lee, Barbara van Schewick, and Larry Lessig have penned a collective letter to European citizens urging them to pressure their representatives. The letter mirrors previous concerns that the rules won't be worth much unless they're changed to prohibit exceptions allowing "fast lanes," discrimination against specific classes of traffic (like BitTorrent), and the potential paid prioritization of select “specialized” services.
These loopholes let ISPs give preferential treatment to select types of content or services, provided they offer a rotating crop of faux-technical justifications that sound convincing. The letter also urges the EU to follow India, Chile, The Netherlands, and Japan in banning "zero rating," or the exemption of select content from usage caps:"Like fast lanes, zero-rating lets carriers pick winners and losers by making certain apps more attractive than others. And like fast lanes, zero-rating hurts users, innovation, competition, and creative expression. In advanced economies like those in the European Union, there is no argument for zero-rating as a potential onramp to the Internet for first-time users. The draft guidelines acknowledge that zero-rating can be harmful, but they leave it to national regulators to evaluate zero-rating plans on a case-by-case basis. Letting national regulators address zero-rating case-by-case disadvantages Internet users, start-ups, and small businesses that do not have the time or resources to defend themselves against discriminatory zero-rating before 28 different regulators."Here in the States, the FCC decided not to ban zero rating, opting instead for this "case by case" enforcement, which so far has simply resulted in no serious enforcement whatsoever, opening the door ever wider to the kind of pay-to-play lopsided business arrangements net neutrality rules are supposed to prevent. Of course, European ISPs have been busy too, last week falling back on the old, bunk industry argument that if regulators actually do their job and protect consumers and small businesses from entrenched telecom monopolies, wireless carriers won't be able to invest in next-generation networks. Those that care about net neutrality have just four days left to make their voices heard.
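The competitive tilt zero-rating creates is easy to see in a toy accounting model. The sketch below is purely illustrative -- the app names, cap size, and traffic numbers are hypothetical, not drawn from any real carrier plan:

```python
# Toy model of a monthly data cap with zero-rating. All names and
# numbers here are hypothetical, for illustration only.

CAP_MB = 2000  # hypothetical monthly cap

def metered_usage(sessions, zero_rated):
    """Sum the traffic that counts against the cap.

    sessions:   list of (app_name, megabytes) tuples
    zero_rated: set of app names the carrier exempts from the cap
    """
    return sum(mb for app, mb in sessions if app not in zero_rated)

# Two video services, identical usage...
sessions = [("CarrierVideo", 1500), ("StartupVideo", 1500)]

# ...but with the carrier's own service zero-rated, only the
# competitor's traffic burns the user's cap.
print(metered_usage(sessions, zero_rated={"CarrierVideo"}))  # 1500
print(metered_usage(sessions, zero_rated=set()))             # 3000
```

Same consumption, a very different position against the cap -- which is the letter's point about carriers, rather than users, picking winners and losers.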

posted 17 days ago on techdirt
We're back again with another in our weekly reading list posts of books we think our community will find interesting and thought provoking. Once again, buying the book via the Amazon links in this story also helps support Techdirt. This week, we've got an oldie but a goodie: economist Michael Perelman's 2002 book Steal This Idea: Intellectual Property and the Corporate Confiscation of Creativity. And, I should note that despite the price being listed in the widget as $32 (at the time I type this), if you click through, there are used copies of the book currently on offer for $0.01. I will state upfront that there's actually plenty in this book that I end up disagreeing with, in that Perelman seems to reflexively dislike corporations and assume that corporations and the public are almost always at odds, which sometimes appears to cloud his thinking -- but that's only on the margins. For the most part, this book is an excellent exploration of how the concept of intellectual property has been abused over and over and over again to harm the public, rather than help it. The book is chock full of examples and history and details of how companies have turned intellectual property into a tool to hurt creators, inventors and the public. Some of the arguments you've probably heard before, but this book goes into great detail on some examples that you may have missed. If you're skeptical of the use of intellectual property, this book is for you. If you think intellectual property can do no wrong, this book is definitely for you. And, yes, it's a bit outdated today, but many of the examples still apply, and the general ideas and principles it discusses absolutely still apply.

posted 17 days ago on techdirt
Evidence acquired using Stingray devices has rarely been suppressed, largely because it's almost impossible to challenge: the FBI -- and the law enforcement agencies it "partners" with (via severely restrictive nondisclosure agreements) -- would rather throw out evidence and let suspects walk than expose the use of IMSI catchers. Earlier this year, a Baltimore city circuit judge threw out evidence obtained with the Baltimore PD's cell tower spoofing equipment. And this was no run-of-the-mill drug bust. An actual murder suspect had evidence suppressed because of the BPD's warrantless deployment of a Stingray device. Without the use of the Stingray, the BPD would not have been able to locate the suspect's phone. And without this location, there would have been no probable cause to search the apartment he was in. You can't build a search warrant on illegally-obtained probable cause, reasoned the judge. Goodbye evidence. "I can't play the 'what if' game with the Constitution," [the judge] said, lamenting that it protects people from illegal searches even when the defendant is "likely guilty." Now, it's finally happened at a higher level. For the first time ever, a federal judge has suppressed evidence obtained by the warrantless use of a Stingray device. U.S. District Judge William Pauley in Manhattan on Tuesday ruled that defendant Raymond Lambis' rights were violated when the U.S. Drug Enforcement Administration used such a device without a warrant to find his Washington Heights apartment. The DEA had used a stingray to identify Lambis' apartment as the most likely location of a cell phone identified during a drug-trafficking probe. Pauley said doing so constituted an unreasonable search. "Absent a search warrant, the government may not turn a citizen's cell phone into a tracking device," Pauley wrote.
The opinion [PDF] notes the DEA first tried to locate Lambis using cell site location info but found it wasn't precise enough. So, it deployed a Stingray to track him down, ultimately ending with a DEA tech roaming an apartment's hallways with a cell site simulator until Lambis was located. A few hours later, DEA agents showed up at the apartment, where Lambis' father allowed them to enter and Lambis himself consented to a search of his room and belongings. It's pretty tough to work your way backwards from a consensual search to a suppression order, but Lambis' lawyer was apparently up to the challenge. But -- as in the Baltimore PD case -- the DEA would never have known which apartment Lambis was located in without the use of a cell site simulator, and that's where it all falls apart for the DEA. The government tried to argue that two fairly recent cases involving thermal imaging (Kyllo) and drug dogs (Thomas) weren't applicable, as its "limited search" only disclosed information it could obtain without a warrant: cell site location. This is at odds with its reasons for deploying the cell site simulator -- which was that the CSLI it obtained wasn't precise enough to locate the suspect. The court finds the government's attempt to route around these two precedential decisions unavailing, noting that the use of a cell site simulator is actually more intrusive than the search methods used in the cases the DEA's lawyers wanted to have ignored. The Government attempts to diminish the power of Second Circuit precedent by noting that Thomas represents a minority position among circuit courts. But this Court need not be mired in the Serbonian Bog of circuit splits. An electronic search for a cell phone inside an apartment is far more intrusive than a canine sniff because, unlike narcotics, cell phones are neither contraband nor illegal. In fact, they are ubiquitous. 
Because the vast majority of the population uses cell phones lawfully on a daily basis, “one cannot say (and the police cannot be assured) that use of the relatively crude equipment at issue here will always be lawful.” The court also points out that the DEA -- for whatever reason -- obtained a warrant for the cell site location info. It wonders why it didn't bother to obtain a warrant for the cell site simulator deployment, seeing as it obtained a warrant for information it could have obtained without one. It also notes that a warrant for CSLI is not the same as a warrant for obtaining precise location info via the use of sophisticated electronic equipment. The fact that the DEA had obtained a warrant for CSLI from the target cell phone does not change the equation. “If the scope of the search exceeds that permitted by the terms of a validly issued warrant . . . , the subsequent seizure is unconstitutional without more.” Horton v. California, 496 U.S. 128, 140 (1990)... Here, the use of the cell-site simulator to obtain more precise information about the target phone’s location was not contemplated by the original warrant application. If the Government had wished to use a cell-site simulator, it could have obtained a warrant. And the fact that the Government previously demonstrated probable cause and obtained a warrant for CSLI from Lambis’s cell phone suggests strongly that the Government could have obtained a warrant to use a cell-site simulator, if it had wished to do so. The government also tried to use the Supreme Court's horrendous Strieff decision to save the evidence, but the court notes that the "temporal proximity" between the illegal Stingray search and the consensual search of the apartment was too close to allow the illegality of the original search to dissipate. The government also tried to use the Third Party Doctrine to salvage its warrantless search, but the court refuses to be sold on this bad idea. 
This Court need not address whether the third party doctrine is “ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks,” United States v. Jones, 132 S. Ct. 945, 957 (2012) (Sotomayor, J., concurring), because even under the historic framework of the doctrine, it is not available to the Government here. The doctrine applies when a party “voluntarily turns over [information] to third parties.” Smith v. Maryland, 442 U.S. 735, 744 (1979) [...] However, the location information detected by a cell-site simulator is different in kind from pen register information: it is neither initiated by the user nor sent to a third party. [...] Unlike CSLI, the “pings” picked up by the cell-site simulator are not transmitted in the normal course of the phone’s operation. Rather, “cell site simulators actively locate phones by forcing them to repeatedly transmit their unique identifying electronic serial numbers, and then calculating the signal strength until the target phone is pinpointed.” These points are good. The following, though, is even better. The court finds the government can't attempt to use the Third Party Doctrine when it has chosen to act as the "third party" in this equation. For both the pen register and CSLI, the Government ultimately obtains the information from the service provider who is keeping a record of the information. With the cell-site simulator, the Government cuts out the middleman and obtains the information directly. Without a third party, the third party doctrine is inapplicable. The Second Circuit has yet to make a decision on the reasonable expectation of privacy in CSLI. If this is appealed, it may finally have to handle that question. Then again, CSLI is only partially implicated here and it may be able to let the Fourth Amendment's reach be determined on a case-by-case basis until something more directly addressing the issue comes along.
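As a technical aside, the "calculating the signal strength until the target phone is pinpointed" technique the opinion quotes can be sketched generically as a weighted average of readings taken at known positions. This is only a minimal illustration of the general idea; the actual algorithms real cell-site simulators use are not public:

```python
# Generic signal-strength localization sketch: stronger readings pull
# the position estimate toward where they were taken. Illustrative
# only -- not the algorithm any real cell-site simulator uses.

def estimate_position(samples):
    """samples: list of (x, y, strength) readings taken at known spots,
    e.g. while walking a hallway. Returns a weighted-centroid estimate."""
    total = sum(s for _, _, s in samples)
    x = sum(px * s for px, _, s in samples) / total
    y = sum(py * s for _, py, s in samples) / total
    return x, y

# Readings along a hallway; the strong reading near x=10 dominates,
# pulling the estimate toward that apartment.
readings = [(0, 0, 1.0), (5, 0, 2.0), (10, 0, 8.0)]
x, y = estimate_position(readings)
print(round(x, 1))  # 8.2
```

The intrusiveness the court focuses on comes from the first step -- forcing the phone to transmit -- not from this arithmetic.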
If nothing else, the ruling here should encourage more federal agencies operating in this district to get a warrant "just in case." Then again, the secrecy surrounding Stingray devices discourages the creation of paper trails, so it may be that the government will continue to roll the Fourth Amendment dice until a higher court tells them otherwise.

posted 17 days ago on techdirt
We've been following an important case for the past few years about whether or not the US can issue a warrant to an American company for data stored overseas. In this case, Microsoft refused to comply with the warrant for some information hosted in Ireland -- and two years ago a district court ruled against Microsoft and in favor of the US government. Thankfully, the 2nd Circuit appeals court today reversed that ruling and properly noted that US government warrants do not apply to overseas data. This is a hugely important case concerning the privacy and security of our data. The key issue here is that the US government was basically on a fishing expedition for information hosted on Microsoft Outlook.com email servers. And there are a few really key issues, concerning jurisdiction, privacy and the all important difference between a subpoena and a warrant (something that many people seem to think are the same thing). Microsoft's own response to the lawsuit did a really good job explaining the issues and how the government wanted to pretend a warrant was a subpoena, and what that meant for the 4th Amendment: The Government cannot seek and a court cannot issue a warrant allowing federal agents to break down the doors of Microsoft's Dublin facility. Likewise, the Government cannot conscript Microsoft to do what it has no authority itself to do -- i.e., execute a warranted search abroad. To end-run these points, the Government argues, and the Magistrate Judge held, that the warrant required by ECPA is not a "warrant" at all. They assert that Congress did not mean "warrant" when using that term, but instead meant some previously unheard of "hybrid" between a warrant and subpoena duces tecum. The Government takes the extraordinary position that by merely serving such a warrant on any U.S.-based email provider, it has the right to obtain the private emails of any subscriber, no matter where in the world the data may be located,
and without the knowledge or consent of the subscriber or the relevant foreign government where the data is stored. This interpretation not only blatantly rewrites the statute, it reads out of the Fourth Amendment the bedrock requirement that the Government must specify the place to be searched with particularity, effectively amending the Constitution for searches of communications held digitally. It would also authorize the Government (including state and local governments) to violate the territorial integrity of sovereign nations and circumvent the commitments made by the United States in mutual legal assistance treaties expressly designed to facilitate cross-border criminal investigations. If this is what Congress intended, it would have made its intent clear in the statute. But the language and the logic of the statute, as well as its legislative history, show that Congress used the word "warrant" in ECPA to mean "warrant," and not some super-powerful "hybrid subpoena." And Congress used the term "warrant" expecting that the Government would be bound by all the inherent limitations of warrants, including the limitation that warrants may not be issued to obtain evidence located in the territory of another sovereign nation. The Government's interpretation ignores the profound and well established differences between a warrant and a subpoena. A warrant gives the Government the power to seize evidence without notice or affording an opportunity to challenge the seizure in advance. But it requires a specific description (supported by probable cause) of the thing to be seized and the place to be searched and that place must be in the United States. A subpoena duces tecum, on the other hand, does not authorize a search and seizure of the private communications of a third party. Rather, it gives the Government the power to require a person to collect items within her possession, custody, or control, regardless of location, and bring them to court at an appointed time.
It also affords the recipient an opportunity to move in advance to quash. Here, the Government wants to exploit the power of a warrant and the sweeping geographic scope of a subpoena, without having to comply with fundamental protections provided by either. There is not a shred of support in the statute or its legislative history for the proposition that Congress intended to allow the Government to mix and match like this. In fact, Congress recognized the basic distinction between a warrant and a subpoena in ECPA when it authorized the Government to obtain certain types of data with a subpoena or a "court order," but required a warrant to obtain a person's most sensitive and constitutionally protected information -- the contents of emails less than 6 months old. It was unfortunate that two judges at the district court level basically ignored this argument, so it's good to see the appeals court shoot it down completely. For the reasons that follow, we think that Microsoft has the better of the argument. When, in 1986, Congress passed the Stored Communications Act as part of the broader Electronic Communications Privacy Act, its aim was to protect user privacy in the context of new technology that required a user’s interaction with a service provider. Neither explicitly nor implicitly does the statute envision the application of its warrant provisions overseas. Three decades ago, international boundaries were not so routinely crossed as they are today, when service providers rely on worldwide networks of hardware to satisfy users’ 21st–century demands for access and speed and their related, evolving expectations of privacy. Rather, in keeping with the pressing needs of the day, Congress focused on providing basic safeguards for the privacy of domestic users. 
Accordingly, we think it employed the term “warrant” in the Act to require pre-disclosure scrutiny of the requested search and seizure by a neutral third party, and thereby to afford heightened privacy protection in the United States. It did not abandon the instrument’s territorial limitations and other constitutional requirements. The application of the Act that the government proposes -- interpreting “warrant” to require a service provider to retrieve material from beyond the borders of the United States -- would require us to disregard the presumption against extraterritoriality that the Supreme Court re-stated and emphasized in Morrison v. National Australia Bank Ltd., 561 U.S. 247 (2010) and, just recently, in RJR Nabisco, Inc. v. European Cmty., 579 U.S. __, 2016 WL 3369423 (June 20, 2016). We are not at liberty to do so. In the full discussion, the court points out where the lower court went wrong, thinking that thanks to the PATRIOT Act, a warrant could apply to the location of the service provider rather than the location of the server. But the court says that's clearly wrong, and the Congressional record makes it pretty clear that it was looking to apply the law just to the United States. As for the idea that the warrant was really a subpoena in disguise, the court says that's not how it works: Warrants and subpoenas are, and have long been, distinct legal instruments. Section 2703 of the SCA recognizes this distinction and, unsurprisingly, uses the “warrant” requirement to signal (and to provide) a greater level of protection to priority stored communications, and “subpoenas” to signal (and provide) a lesser level. 18 U.S.C. § 2703(a), (b)(1)(A). Section 2703 does not use the terms interchangeably. Id. Nor does it use the word “hybrid” to describe an SCA warrant. Indeed, § 2703 places priority stored communications entirely outside the reach of an SCA subpoena, absent compliance with the notice provisions. Id.
The term “subpoena,” therefore, stands separately in the statute, as in ordinary usage, from the term “warrant.” We see no reasonable basis in the statute from which to infer that Congress used “warrant” to mean “subpoena.” [....] We see no reason to believe that Congress intended to jettison the centuries of law requiring the issuance and performance of warrants in specified, domestic locations, or to replace the traditional warrant with a novel instrument of international application. There is, of course, the further issue of Microsoft being a US company, but the court says that doesn't magically make its overseas data subject to these kinds of warrants, because the intent of the law is to protect the privacy of users' communications, not to make it easier for the government to snoop. The reader will recall the SCA’s provisions regarding the production of electronic communication content: In sum, for priority stored communications, “a governmental entity may require the disclosure . . . of the contents of a wire or electronic communication . . . only pursuant to a warrant issued using the rules described in the Federal Rules of Criminal Procedure,” except (in certain cases) if notice is given to the user.... In our view, the most natural reading of this language in the context of the Act suggests a legislative focus on the privacy of stored communications. Warrants under § 2703 must issue under the Federal Rules of Criminal Procedure, whose Rule 41 is undergirded by the Constitution’s protections of citizens’ privacy against unlawful searches and seizures. And more generally, § 2703’s warrant language appears in a statute entitled the Electronic Communications Privacy Act, suggesting privacy as a key concern. 
The overall effect is the embodiment of an expectation of privacy in those communications, notwithstanding the role of service providers in their transmission and storage, and the imposition of procedural restrictions on the government’s (and other third party) access to priority stored communications. The circumstances in which the communications have been stored serve as a proxy for the intensity of the user’s privacy interests, dictating the stringency of the procedural protection they receive—in particular whether the Act’s warrant provisions, subpoena provisions, or its § 2703(d) court order provisions govern a disclosure desired by the government. Accordingly, we think it fair to conclude based on the plain meaning of the text that the privacy of the stored communications is the “object[] of the statute’s solicitude,” and the focus of its provisions. The court goes on at length arguing that the Stored Communications Act's default is that communication privacy must be protected, and the exceptions are narrow. All three judges on the panel agreed, but one -- Judge Gerard Lynch -- wrote a concurrence that tries to undercut the strong 4th Amendment/privacy arguments in the overall opinion, basically noting that he believes the decision doesn't come down to 4th Amendment issues or privacy protection, but merely how Congress drew up the law in the Stored Communications Act -- and basically argues that if Congress doesn't like this result, it can just rewrite the law. It's also important to note that Rule 41 is the underpinning of much of this case, and that's the rule that the courts recently agreed to change to allow the DOJ more power to simply hack overseas servers. That shouldn't directly impact this particular case or similar situations, but does show how the DOJ is looking for ways to create end-runs around limitations on domestic laws to try to get international data.
Still, for now, this ruling is a surprisingly good one, reinforcing privacy protections in overseas data. Kudos to Microsoft for going to court over this when it would have been quite easy for it to just give in and hand over the data. I assume that the US government will seek to get this ruling overturned, either via an en banc hearing on the 2nd Circuit or going to the Supreme Court, so the case isn't over yet. But, as for right now, it's in a good position.

posted 17 days ago on techdirt
Windscribe is much more than a VPN. It’s a desktop application and browser extension that work in conjunction to protect your online privacy, unblock websites, and remove ads and trackers from your everyday browsing. With Windscribe, you’ll never mess with confusing settings and options menus again; just turn it on on your desktop once, and it’s good to go in the background forever. It is available for $39 from the Techdirt Deals Store. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

posted 17 days ago on techdirt
I've been seeing a few anti-encryption supporters pointing to a new ProPublica report on terrorists using encrypted communications as sort of proof of their position that we need to backdoor encryption and weaken security for everyone. The article is very detailed and thorough and does show that some ISIS folks make use of encrypted chat apps like Telegram and WhatsApp. But that's hardly a surprise. It was well known that those apps were being used, just like it's been well known that groups like Al Qaida were well aware of the usefulness of encryption going back many years, even predating 9/11. It's not like they've suddenly learned something new. So, the fact that they're now using tools like WhatsApp and Telegram is hardly a surprise. It also kinda highlights the idiocy of trying to backdoor American encryption. Telegram is not a US company and WhatsApp's encryption is based on the open source Signal protocol, meaning that any American backdoor encryption law isn't going to be very effective. But, really, what strikes me, from reading the whole article beyond the headline notion of "ISIS uses encryption," is that it lists example after example of the fact that folks in ISIS use encryption badly and often seem prone to revealing their information. This is not unique to ISIS. Lots of people are not very good about protecting themselves. Hell, I'm probably not very good about my own use of encryption. But, of course, I'm also not trying to blow things up or kill people. Either way, story after story after story in the article highlights the rather bumbling aspects of teaching ISIS supporters how and why to use encrypted communications and to avoid surveillance. My favorite example: On Jan. 4, 2015, an exasperated coordinator repeatedly explained to a befuddled caller with a Lebanese accent that he could only bring a basic cell phone to Syria, according to a transcript. 
“The important thing is that when you arrive in Turkey you have a small cell phone to contact me,” the coordinator said. “Don’t bring smart phones or tablets. OK, brother?”
For the fourth time, the recruit asked: “So we can’t have cell phones?”
“Brother, I said smart phones: iPhone, Galaxy, laptop, tablet, etcetera.”
Sounding a bit like a frustrated gate agent at a crowded airport, the coordinator added: “Each of you can only bring one suitcase. If you come alone, just bring one suitcase. That is, a carry-on and one suitcase.”
“I didn’t understand the last thing, could you explain?”
“Brother, call me when you get to Turkey.”
Then there was the case where someone planned a plot using an encrypted WhatsApp conversation, but police were already bugging the guy, so they heard what he was saying anyway: In April, Italian police overheard a senior figure in Syria urging a Moroccan suspect living near Milan to carry out an attack in Italy, according to a transcript. Although the voice message had been sent through an encrypted channel, the Moroccan played it back in his car, where a hidden microphone recorded it. In the message, the unidentified “sheik” declared: “Detonate your belt in the crowds declaring Allah Akbar! Strike! (Explode!) Like a volcano, shake the infidels, confront the throng of the enemy, roaring like lightning, declare Allah Akbar and blow yourself up, O lion!” The suspects exchanged recorded messages over WhatsApp, an encrypted telephone application that is widely used in Europe, the Arab world and Latin America.
All of these examples keep making the same point that many people have been making for a long time. Yes, encryption hides some aspect of communications. That's part of the point. But the idea that it creates a "going dark" situation is massively exaggerated. There are many other ways to get the necessary information, through traditional surveillance and detective work. And the report suggests that's working.
And the fact that many ISIS recruits are particularly unsophisticated in understanding how and when to use encryption only makes that kind of thing easier for people tracking them. In discussing the Paris attacks, for example, the article notes that while some of the attackers were told to use encryption, they didn't. Abaaoud’s operatives did not always follow security procedures, however. In June of last year, Turkish immigration authorities detained Tyler Vilus, a French plotter en route to Paris with someone else’s Swedish passport. Allowed to keep his cellular phone in a low-security detention center, Vilus brazenly sent an unencrypted text message to Abaaoud in Syria, according to a senior French counterterror official. “I have been detained but it doesn’t seem too bad,” the message said, according to the senior official. “I will probably be released and will be able to continue the mission.” Instead, U.S. spy agencies helped retrieve that text and French prosecutors charged Vilus with terrorist conspiracy. Anyway, it's no surprise that terrorists are going to use encryption. Of course they have been for over a decade and will continue to do so. The issue is that it's not as horrible as law enforcement is making it out to be. Plotters have always been able to plan in ways that law enforcement has been unable to track (such as discussing things in person, in other languages, or through simple ciphers or codes). That's always happened, and somehow we managed to get by. Yes, sometimes law enforcement doesn't get to know absolutely everything about everyone. And that's a good thing. And sometimes, yes, that means that terrorists will be able to plan bad things without law enforcement knowing it. But that's part of the trade-off for living in a free society.

posted 17 days ago on techdirt
Well known anti-Muslim troll Pamela Geller has teamed up with a group called the American Freedom Law Center to file one of the dumbest lawsuits we've ever seen. There's so much wrong here it's difficult to know where to start. Here's the lawsuit itself, which is filed against US Attorney General Loretta Lynch, even though Geller's own story about the lawsuit falsely claims she's suing Facebook. She's not. She's suing the US government because Facebook relies on Section 230 of the CDA in taking down some of her pages, and she claims, ridiculously, that Section 230 of the Communications Decency Act violates the First Amendment. The lawsuit is wrong on so many levels it's not even funny. Let's start with this, though -- Geller has long positioned herself as an extreme supporter of the First Amendment. And yet, she's now suing the government over CDA 230, a law which has probably done more than any other to guarantee that the First Amendment works on the internet. The lawsuit talks up the vast open and public forums of the internet, which is accurate, but then argues that because there's so much content online, Section 230 no longer applies. Unlike the conditions that prevailed when Congress first authorized regulation of the broadcast spectrum, the Internet can hardly be considered a “scarce” expressive commodity. It provides relatively unlimited, low-cost capacity for communication of all kinds. And then it gets to the crux of her argument: that popular internet forums are so important, no one should ever be barred from using them: Denying a person or organization access to these important social media forums based on the content and viewpoint of the person’s or organization’s speech on matters of public concern is an effective way of silencing or censoring speech and depriving the person or organization of political influence and business opportunities. 
Due to the importance of social media to political, social, and commercial exchanges, the censorship at issue in this Complaint is an unmatched form of censorship. Consequently, there is no basis for qualifying the level of First Amendment scrutiny that should be applied in this case. Except, this is really, really confused. Section 230 does not enable censorship. A private company is free to deny service or moderate its own services as much as it wants. That's their right as a private company. This is not a Section 230 issue at all. Geller and her lawyers are hellishly confused. Yes, Section 230's (c)(2) includes a so-called good-samaritan clause that basically says that a site does not take on new liability for taking down content, but that's separate from the issue of deciding to moderate content at all. Facebook can take down your page whenever it wants and it's not a First Amendment issue because Facebook isn't the government. And Section 230 has nothing to do with this at all, other than actually encouraging Facebook to leave up more speech since it's not considered liable for its users' speech. But Geller's lawyers don't seem to understand the law they're whining about. Section 230 permits content- and viewpoint-based censorship of speech. By its own terms, § 230 permits Facebook, Twitter, and YouTube “to restrict access to or availability of material that [they] consider[] to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Except that's not what Section 230 does at all. Companies are already permitted to do that because they're private companies. All Section 230 says is that in removing content, that doesn't mean those companies suddenly have liability for other content that they left up. Geller and her lawyers simply don't understand what Section 230 does and says. And yet they're suing over it. 
Section 230 confers broad powers of censorship, in the form of a “heckler’s veto,” upon Facebook, Twitter, and YouTube censors, who can censor constitutionally protected speech and engage in discriminatory business practices with impunity by virtue of this power conferred by the federal government. Except it does no such thing. Actually, Section 230 frequently protects against the heckler's veto because it makes it clear that platforms don't have to do anything and they're still protected from liability. This is actually a stronger protection against a heckler's veto than basically every other country in the world, most of which have a DMCA-like "notice and takedown" system, which does lead to protected speech being deleted. Section 230 protects against that, and a very confused Geller and her lawyers get this backwards. Section 230 is not tied to a specific category of speech that is generally proscribable (i.e., obscenity), nor does it provide any type of objective standard whatsoever. The statute does permit the restriction of obscenity, but it also permits censorship of speech that is “otherwise objectionable, whether or not such material is constitutionally protected.” 47 U.S.C. § 230(c)(2)(A). Further, the subjective “good faith” of the censor does not remedy the vagueness issue, it worsens it. This is just further confusion. The lawsuit is arguing over an issue as if this is about the government censoring speech, rather than private companies moderating speech -- something they've always been able to do, and which itself is protected by the First Amendment. This lawsuit is the legal equivalent of that idiot who claims that any company moderating content is violating the First Amendment. 
And to that, I've got an obligatory xkcd for you: From there, she goes on to complain about Facebook, Twitter and YouTube all taking down some of her content for terms of service violations, and insisting that Section 230 is to blame (it's not) and that her free speech rights have been denied (they have not). Section 230 of the CDA, facially and as applied, is a content- and viewpoint based restriction on speech in violation of the First Amendment. Section 230 of the CDA, facially and as applied, is vague and overbroad and lacks any objective criteria for suppressing speech in violation of the First Amendment. Section 230 of the CDA, facially and as applied, permits Facebook, Twitter, and YouTube to engage in government-sanctioned discrimination and censorship of free speech in violation of the First Amendment. None of that is a remotely accurate description of Section 230. Not even close. Geller's blog post, which falsely claims she's suing Facebook rather than the US government, is just a long, extended whine about the fact that Facebook takes down her content when she violates its terms. Now, we've been vocal critics of Facebook's willingness to silence content and its almost arbitrary decision-making in determining what content is appropriate for Facebook and what is not, but we'd never suggest that Facebook doesn't have a legal right to make those decisions. To make a bizarre First Amendment argument here, trying to link Facebook to the government via the free speech protections of Section 230, is nonsensical. It's almost as if her lawyers didn't even realize the argument they're really trying to make (which would also be a non-starter) -- that Facebook, Twitter and YouTube are de facto public spaces -- and thus went with the even more bat-shit crazy misinterpretation of Section 230.
As for her lawyers at the American Freedom Law Center (AFLC), they're just as confused in a blog post about the lawsuit: Section 230 provides immunity from lawsuits to Facebook, Twitter, and YouTube, thereby permitting these social media giants to engage in government-sanctioned censorship and discriminatory business practices free from legal challenge. It's not government-sanctioned censorship. And the immunity it provides is just that these platforms don't lose their own protections against liability for the content they leave up just because they choose to take down some other content. Section 230 confers no special benefit on platforms for taking down content. It just says that taking down content won't lose them other protections -- protections, I should remind you -- that help promote and protect free expression online. While there have been some questionable CDA 230 rulings lately, this one is an easy one. It should be laughed out of court pretty quickly on the basis of "did you even read the law you're suing over?"

posted 17 days ago on techdirt
As we've noted for some time, Comcast continues to expand the company's usage cap "trial" into more and more markets. As a clever, lumbering monopoly, Comcast executives believe that if they move slowly enough, consumers won't realize they're the frog in the boiling pot metaphor. But as we've noted time and time again, Comcast usage caps are utterly indefensible price hikes on uncompetitive markets, with the potential for anti-competitive abuse (since Comcast exempts its own services from the cap). This is all dressed up as a "trial" where consumer feedback matters to prop up the flimsy narrative that Comcast is just conducting "creative price experimentation." Last week, Comcast quietly notified customers that the company's caps are expanding once again, this time into Chicago and other parts of Illinois, as well as portions of Indiana and Michigan. Comcast recently raised its cap from 300 GB to one terabyte in response to signals from the FCC that the agency might finally wake up to the problems usage caps create. And while that's certainly an improvement, it doesn't change the fact that usage caps on fixed-line networks are little more than an assault on captive, uncompetitive markets. To sell customers on the exciting idea of paying more money for the exact same (or less) service, a notice sent to Comcast users last week informs them they're lucky to now be included in the "terabyte internet experience," as if this is some kind of glorious reward being doled out to only the company's most valued customers. The company also tries to shine up its decision to start charging users $50 more per month if they want to avoid the cap as an act of altruistic convenience, and tries to make the caps seem generous by measuring them in terms of gaming hours and photos: "We know customers want a carefree online experience that doesn't require them to think about their data usage plan, and we offer a plan that does just that...What can you do with a terabyte?
Stream about 700 hours of HD video, play more than 12,000 hours of online games, or download 600,000 high-res photos in a month." How generous. You can also check your email account 8 billion times under our totally unnecessary restrictions. As we've long noted, caps are solely about protecting legacy TV revenues from Internet video, while creating new ways (zero rating) to distort the level playing field. And as AT&T and Verizon give up on unwanted DSL customers and cable's broadband monopoly grows in many areas, this incredible "experience" will be headed in your direction sooner than you probably realize.
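Comcast's math is worth sanity-checking. A quick back-of-the-envelope sketch of how far a 1 TB cap stretches at a given stream bitrate (the 3.2 Mbps and 5 Mbps figures below are illustrative assumptions, not Comcast's own numbers):

```python
def hours_of_streaming(cap_gb: float, bitrate_mbps: float) -> float:
    """Hours of video a data cap allows at a constant stream bitrate."""
    gb_per_hour = bitrate_mbps * 1e6 / 8 * 3600 / 1e9  # Mbps -> GB consumed per hour
    return cap_gb / gb_per_hour

# Comcast's "700 hours of HD video" works out to roughly a 3.2 Mbps stream:
print(round(hours_of_streaming(1000, 3.2)))  # ~694 hours
# At 5 Mbps -- a commonly recommended HD streaming rate -- the terabyte goes faster:
print(round(hours_of_streaming(1000, 5)))    # ~444 hours
```

In other words, the "generosity" depends entirely on the bitrate you assume; a household streaming 4K video at 15+ Mbps would burn through the cap in roughly 150 hours.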

posted 17 days ago on techdirt
The treatment of all things "cyber" by the government is incredibly inconsistent. Give someone a password so they can deface a website for 40 minutes and it's two years in jail. Doxx, SWAT, and cyberstalk multiple people and the best the court can do is two years minus time served. The end result is one year in prison for Mir Islam, who doxxed multiple celebrities and politicians, and called in fake threats that resulted in the swatting of at least nineteen people, including security researcher Brian Krebs, who uncovered Islam's doxxing tactics. Krebs' investigation of Islam and his abuse of free credit report services to obtain personal information on a variety of public figures led to the following: Peeved that I’d outed his methods for doxing public officials, Islam helped orchestrate my swatting the very next day. Within the span of 45 minutes, KrebsOnSecurity.com came under a sustained denial-of-service attack which briefly knocked my site offline. At the same time, my hosting provider received a phony letter from the FBI stating my site was hosting illegal content and needed to be taken offline. And, then there was the swatting which occurred minutes after that phony communique was sent. [...] Nearly a dozen heavily-armed officers responded to the call, forcing me out of my home at gunpoint and putting me in handcuffs before the officer in charge realized it was all a hoax. The response to the hoax call on Krebs' residence was, by comparison, minimal. Islam also called in a fake active shooter report at the University of Arizona campus. This was apparently in retaliation for a cheerleader's failure to realize Islam's cyberstalking was just another way of saying "I love you." A woman representing an anonymous “Victim #3” of Islam’s was appearing in lieu of a cheerleader at the University of Arizona that Islam admitted to cyberstalking for several months.
When the victim stopped responding to Islam’s overtures, he phoned in an active shooter threat to the local police there that a crazed gunman was on the loose at the University of Arizona campus. According to Robert Sommerfeld, police commander for the University of Arizona, that 2013 swatting incident involved 54 responding officers, all of whom were prevented from responding to a real emergency as they moved from building to building and room to room at the university, searching for a fictitious assailant. Sommerfeld estimates that Islam’s stunt cost local responders almost $40,000, and virtually brought the business district surrounding the university to a standstill for the better part of the day. Worse, some of Islam's swatting efforts and cyberstalking occurred while he was "cooperating" with federal prosecutors following his arrest for attempting to sell stolen credit cards to undercover agents. Federal prosecutors wanted to see Islam jailed for nearly four years -- towards the upper reaches of the mandatory sentencing guidelines. Instead, the judge handed down a sentence of two years. Islam has been in federal custody since July 2015 and that time is being credited towards his sentence, meaning it will only be another year at the most before Islam is free again. The credit for time served makes sense and the departure from the upper limits of the guidelines is something I would be extremely hesitant to suggest is a bad thing. Prosecutors wanted a much longer sentence, and the allegations here would seem to justify a lengthier imprisonment for Islam. The problem with the government's fear of anything cyber-related is that the default mode for prosecutors is almost always the upper reaches of the sentencing guidelines, even when the severity of the criminal activity doesn't appear to warrant this sort of punitive sentencing. The government sought a longer sentence for Matthew Keys' minimal participation in a 40-minute headline alteration at a news website. 
Someone who endangered the lives of dozens of people by sending heavily-armed law enforcement officers after them -- in addition to doxxing a large number of public figures and participating in multiple cyberstalkings -- was apparently only deemed dangerous enough to warrant a 46-month sentence, as compared to the 60 months sought in the Keys case. Then there's this: Judge Moss, in explaining his brief deliberation on arriving at Islam’s two-year (attenuated) sentence, said he hoped to send a message to others who would endeavor to engage in swatting attacks. Swatting has the potential to kill people, something clearly not reflected by the "severity" of this sentence. As Brian Krebs points out, it does send a message, although certainly not the one the judge intended. It says you can endanger the lives of others without seriously affecting your own freedom. It also sends the message that the government -- as a whole -- will remain incoherent and inconsistent in its handling of cybercrime.

posted 17 days ago on techdirt
Techdirt has run many articles about China's direct assault on Internet freedom. Indeed, its attempts to muzzle online dissent are so all-encompassing you might think it has run out of things to censor. But you'd be wrong: China is now reining in games for mobile phones, as a post on Tech in Asia explains: A little over a month ago, Chinese censorship bureau SAPPRFT announced new rules that require every mobile game launched in China to be pre-approved by SAPPRFT (already-launched games will have to get retroactive approval before the grace period ends in October). Before the rules had even gone into effect, developers and analysts alike were predicting things could be bad, and that the rules might dismantle China’s indie mobile gaming scene entirely. Making sure games aren't seditious in any way might be expected, but there's a rather weird twist to this latest move: One developer's rant has gone viral in the Chinese web after their game was supposedly rejected by SAPPRFT for containing English words. Not offensive English words, mind you, but completely innocuous ones like "mission start" and "warning." "I'm really fucking surprised," wrote the developer of the rejection. Another developer confirmed that their game had been rejected for the same reason: including English words like "go" and "lucky." SAPPRFT's rules also forbid the use of traditional Chinese characters. The use of English here is hardly subversive. The words in question form part of a global gaming language that has little to do with either the US or the UK. The ban on traditional Chinese characters, as opposed to the simplified ones that are generally used in China, is more understandable: Taiwan still uses the traditional form, so their inclusion might be seen as some kind of subliminal political statement. The consequence is likely to be fewer games from smaller Chinese software companies, who are less able to meet the stringent new demands. 
As the Tech in Asia post rightly points out: We could be facing a future where China's entire mobile game catalogue consists only of the games produced by powerful corporations like Tencent and Netease, with no room for startups and indies. And that is probably the real reason for this latest move: big companies tend to be far more willing to toe the government line than smaller independents, since they have far more to lose. So, as with other apparently arbitrary moves, the latest unexpected clampdown by the Chinese government looks to be yet another example of its shrewd and subtle control of the online world. Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

posted 17 days ago on techdirt
A few years ago, I got to travel to Moscow to present some of our research at an event. Having heard more than a few stories about internet access issues in Russia, before going I made sure that I had three separate VPNs lined up in case any of them were blocked. I ended up using Private Internet Access -- which is already quite well-known and reliable. That's my regular VPN, but I had been worried that maybe it wouldn't work in Moscow. I was wrong. It worked flawlessly. But apparently that's no longer the case. Just after Russia's new surveillance bill passed, complete with mandates for encryption backdoors and data retention (along with a demand that all encryption be openly accessible for the government within two weeks), apparently Russian officials seized Private Internet Access's servers in Russia, causing the company to send an email to all its subscribers, announcing what happened, what it was doing to fix things... and also that it was no longer doing business in Russia. To Our Beloved Users, The Russian Government has passed a new law that mandates that every provider must log all Russian internet traffic for up to a year. We believe that due to the enforcement regime surrounding this new law, some of our Russian Servers (RU) were recently seized by Russian Authorities, without notice or any type of due process. We think it’s because we are the most outspoken and only verified no-log VPN provider. Luckily, since we do not log any traffic or session data, period, no data has been compromised. Our users are, and will always be, private and secure. Upon learning of the above, we immediately discontinued our Russian gateways and will no longer be doing business in the region. To make it clear, the privacy and security of our users is our number one priority. For preventative reasons, we are rotating all of our certificates. 
Furthermore, we’re updating our client applications with improved security measures to mitigate circumstances like this in the future, on top of what is already in place. In addition, our manual configurations now support the strongest new encryption algorithms including AES-256, SHA-256, and RSA-4096. All Private Internet Access users must update their desktop clients at https://www.privateinternetaccess.com/pages/client-support/ and our Android App at Google Play. Manual openvpn configurations users must also download the new config files from the client download page. We have decided not to do business within the Russian territory. We’re going to be further evaluating other countries and their policies. In any event, we are aware that there may be times that notice and due process are forgone. However, we do not log and are default secure against seizure. If you have any questions, please contact us at [email protected] Thank you for your continued support and helping us fight the good fight. Sincerely, Private Internet Access Team
Of course, the end result of this is going to make Russian internet users a lot less safe. The war on encryption is a really dumb idea, and kudos to PIA for taking a stand.
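For the curious, the algorithms PIA names map onto OpenVPN client settings roughly as follows. This is a hypothetical manual-configuration excerpt, not PIA's actual config file; the endpoint and file names are placeholders:

```
client
dev tun
proto udp
remote vpn.example.net 1194   # placeholder gateway, not a real PIA endpoint
cipher AES-256-CBC            # AES-256 for the data channel
auth SHA256                   # SHA-256 HMAC for packet authentication
ca ca.rsa.4096.crt            # CA certificate whose key is RSA-4096
remote-cert-tls server
```

Note that the RSA-4096 part lives in the certificates rather than in a directive; the `ca` line simply points at a CA cert with a 4096-bit key.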

posted 18 days ago on techdirt
Here's some good news. After decades of ridiculously bad management, it appears that the Library of Congress has a real leader. Dr. Carla Hayden has been approved by the Senate as our new Librarian of Congress by a wide margin, 74 to 18. And that's despite a last minute push by the ridiculous Heritage Foundation to argue that the Librarian of Congress should not be a librarian (and one with tremendous administrative experience). Heritage Foundation's alerts can often sway Republican Senators, so the fact that only 18 still voted against her is quite something. Hayden was also able to get past ridiculous claims that she was pro-obscenity or pro-piracy based on people who just didn't like the idea of an actually qualified person in the position. She's an exceptionally qualified librarian with administrative and leadership experience. And while I'm sure I won't agree with everything she does, it seems like a massive improvement on the previous librarian, James Billington, who famously resisted any kind of modernization efforts, and who the Government Accountability Office had to call out multiple times for his leadership failings. Billington was so bad that when he resigned, the Washington Post was able to get people to go on the record celebrating. The reaction inside the library was almost gleeful, as one employee joked that some workers were thinking of organizing a conga line down Pennsylvania Avenue. Another said it felt like someone opened a window. “There is a general sense of relief, hope and renewal, all rolled into one feeling,” said one staffer who spoke on the condition of anonymity for fear of reprisal. “Like a great weight has been lifted from our shoulders.” Maureen Moore, who retired in 2005 but volunteers at the library, said she and her friends were thrilled. “It’s a great day for the library. The man has had 27 years to do good things, and he hasn’t,” she said. 
It's a low bar, but Hayden will almost certainly be better than that -- and hopefully a lot better as well. She's shown in the past a willingness to stand up and fight against government surveillance and for freedom of speech and access to information. Her positions on copyright are less clear, but as she's now in charge of the Copyright Office, hopefully she'll bring some much needed balance to that office, and a greater recognition, as a librarian, of the importance of access to information, rather than locking up all info. Of course, given all that, I can pretty much guarantee that Hollywood and other legacy copyright industries are going to pump up their fight to move the Copyright Office out of the Library of Congress, and either set it up as its own agency, or dump it into the Dept. of Commerce, perhaps as part of the Patent and Trademark Office. Expect to see a big push on that very soon, including all sorts of bullshit arguments in favor of it. But remember, copyright was designed to benefit the public, and not as some sort of commercial tool that belongs in the Dept. of Commerce.

posted 18 days ago on techdirt
Security researcher Jonathan Zdziarski has been picking apart the FBI's oral testimony on the NIT it deployed in the Matish/Playpen case. The judge presiding over that case denied Matish's suppression request for a number of reasons -- including the fact that Matish's residence in Virginia meant that Rule 41 jurisdiction rules weren't violated by the FBI's NIT warrant. Judge Morgan Jr. then went off script and suggested the FBI didn't even need to obtain a warrant to deploy a hacking tool that exposed end user computer info because computers get hacked all the time. He equated this to police peering through broken blinds and seeing something illegal inside a house, while failing to recognize that his analogy meant the FBI could let themselves inside the house first to break the blinds, then peer in from the outside and claim "plain sight." The oral arguments [PDF] -- using FBI Special Agent Daniel Alfin's testimony -- were submitted in yet another case tied to the seizure of a child porn website, this one also taking place in Virginia and where the presiding judge has similarly denied the defendant's motion to suppress. The DOJ has added the transcript of the agent's oral testimony in the Matish prosecution as an exhibit to this case, presumably to help thwart the defendant's motion to compel the FBI to turn over the NIT's source code. Many assertions are made by Agent Alfin in support of the FBI's claim that its hacking tool -- which strips away any anonymity-protecting efforts put into place by the end user and sends this information to a remote computer -- is not malware. And many of them verge on laughable. Or would be laughable, if Alfin wasn't in the position of collecting and submitting forensic evidence. There's so much wrong in here, it's probably best to just start at the top. 1. A MAC address is a unique identifier that can never be altered. THE WITNESS: Yes, Your Honor. MAC is an acronym that stands for media address control. 
THE COURT: Is that different than IP address? THE WITNESS: Yes, Your Honor. A MAC address is unique and does not change. So you can look at the MAC address in the matter at hand from Mr. Matish's computer, and that MAC address is always the same. It is the one that was identified by the government. It was also the one that was seized by the government. A MAC address is hard-wired or burned into the card. [Compared with this, from the same agent, roughly 30 pages later…] Q. Are any of those items -- I believe you testified to the MAC address. Can that be changed? A. It can be -- 2. The FBI didn't need to encrypt the data collected by the NIT because, hey, Tor is secure and can't be compromised. Q: In one of the declarations that was submitted on behalf of Mr. Matish by Dr. Soghoian, it is alleged that because the NIT sent data over the regular Internet and not encrypted that the authenticity of the data could not be verified. A: This is incorrect. It also fails to acknowledge that the NIT was, in fact, sent to Mr. Matish's computer over the Tor network, which is encrypted. 3. Encryption would ruin the integrity of the collected evidence. Q. Would encryption of the data as it was transmitted from the computer to the government -- what effect, if any, would that have had on the utility of the data going forward? A. It would have not completely made the network data useless, but it would have hurt it from an evidentiary standpoint. Because the FBI collected the data in a clear text, unencrypted format, it shows the communication directly from Mr. Matish's computer to the government. It can be read; it can be analyzed. It was collected and provided to defense today, and they can review exactly what the FBI collected. Had it been encrypted, it would not have been of the same value, because the encrypted data stream itself could not be read. 
In order to read that encrypted data stream, it would have to first be decrypted by the government, which would fundamentally alter the data. It would still be valid, it still would have been accurate data; however, it would not have been as forensically sound as being able to turn over exactly what the government collected.

4. The FBI's malware is not malware because "mal" means "bad" and "FBI" means "good."

Q. And, finally, would you describe the NIT as malware?

A. No. The declaration of Dr. Soghoian disputes my point from my declaration that I do not believe the NIT should be considered malware, but he fails to address the important word that makes up malware, which is "malicious." "Malicious" in criminal proceedings and in the legal world has very direct implications, and a reasonable person or society would not interpret the actions taken by a law enforcement officer pursuant to a court order to be malicious. And for that reason I do not believe that the NIT utilized in this case pursuant to a court order should be considered to be malware.

5. The defense has all the data it needs to examine the FBI's NIT.

Q. Okay. And you're aware that the first time that the government agreed to produce that particular data was in its response to this motion to compel?

A. I assume that's the case. I don't know exactly what date it was provided on, but I know it was turned over.

Q. And then you talked about a data stream being made available, right?

A. Yes.

Q: And you're aware that the first time that the government agreed to produce that data was in its surreply to the motion to compel.

A. I don't recall the first time that that data was made available, but I know it has been made available and has been turned over.

Q. As of --

A. As of today.

Q. -- 20 minutes ago, correct?

A. Yes. To the best of my knowledge, it was not turned over prior to that.

7. The NIT is like a set of burglar's tools...

Q. You say the exploit would shed no light on what the government did.
The government deployed this exploit, correct?

A. The government used the exploit to deploy the NIT.

Q. And I believe you used the analogy that this exploit is like a way of picking a lock, right?

8. … except that sounds really bad and not something the "good" FBI should be doing. So, now it's an open window.

A. Yes. A more accurate analogy may be going in through an open window. As I've stated in my declaration, there was a vulnerability on Mr. Matish's computer. The FBI did not create that vulnerability. That vulnerability can be thought of as an open window. So we went in through that open window, the NIT collected evidence, and then left. We made no change to the window.

There's plenty more to read through, and Zdziarski's Twitter stream contains several highlights and some incisive analysis. Matish's lawyer also makes a very good point about the problems with using insecure data -- transmitted in unencrypted form -- as forensic evidence.

To prevent tampering with the evidence. I mean, this is analogous to -- I mean, there's a crime scene. Certain evidence is collected, and rather than bagging and labeling it and following established techniques for how evidence is to be collected and transferred back to, you know, the server, which is like an evidence locker, they just threw everything in the back seat of the cruiser and drove back. Oh, and, by the way, they won't tell us whether on the way back they also picked up someone else who rode in the back of the cruiser.

Or as Zdziarski puts it:

FBI’s argument against encryption being forensically sound is like arguing that evidence becomes invalid if you put it into a sealed box.

— Jonathan Zdziarski (@JZdziarski) July 12, 2016

He also points out that the FBI's refusal to allow Matish to examine the NIT is not at all aligned with normal evidentiary practices.
We've set out through our expert declarations exactly why this information is critical, and the government is saying, no, we've looked at it, we've analyzed it; our experts say you wouldn't be able to make a meaningful trial defense based on this information. But in some ways, Your Honor, that's the same as saying, we're not telling you who our confidential informant is. You don't need to talk to him, because we're telling you he's believable and everything he's saying is true. You don't need to look at the DNA tests from the lab, because we're telling you it's a match, and we're telling you the tests were fine.

Despite this, the court denied the motion to suppress, and Matish will be dealing with the evidence collected against him. According to this testimony, that evidence is thin -- some images found in unallocated space, suggesting they had been deleted -- but it may be enough to secure a conviction. The testimony does, however, give us greater insight into the FBI's handling of forensic evidence and its perception of the exploits at its disposal. And what's on display here is far from encouraging.
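For what it's worth, the agent's central claim in points 2 and 3 -- that decrypting data would "fundamentally alter" it and make it less forensically sound -- is easy to test. Decrypting correctly encrypted data returns the exact bytes that went in, and a hash taken before encryption proves it. Here's a minimal Python sketch; the throwaway XOR keystream and the sample "evidence" string are illustrative inventions, standing in for a real authenticated cipher like AES-GCM:

```python
import hashlib
import os

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 of key || counter, repeated. Illustration only --
    # not a vetted cipher; a real tool would use AES-GCM or similar.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt  # XOR with the same keystream is its own inverse

# Hypothetical collected "evidence" bytes; hash them before encryption.
evidence = b"MAC=00:11:22:33:44:55 host=example-pc"
digest_before = hashlib.sha256(evidence).hexdigest()

key = os.urandom(32)
ciphertext = encrypt(key, evidence)
recovered = decrypt(key, ciphertext)

# Decryption returns byte-for-byte identical data; the hash proves it.
assert recovered == evidence
assert hashlib.sha256(recovered).hexdigest() == digest_before
```

The recovered bytes hash to exactly the same value as the originals -- which is Zdziarski's "sealed box" point: encrypting evidence in transit protects it from tampering; it doesn't degrade it.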

posted 18 days ago on techdirt
Another week, another CFAA (Computer Fraud and Abuse Act) ruling out of the 9th Circuit Appeals Court. This time it's the infamous Facebook v. Power.com case that's been going on since 2008. When we first came across the case, in early 2009, we insisted that it made no sense. Power.com was trying to set itself up as a sort of "meta" social network, or perhaps a social network management system, where users could have a dashboard for all their different social networks. Facebook didn't like this and sued over a long list of things, including copyright and trademark infringement, unlawful competition, violation of anti-spam laws... and the CFAA. Most of the claims went nowhere, but the CFAA and anti-spam ones lived on (because Power.com had systems for sending emails to users). The copyright claims were troubling, but the CFAA claims were the ones that concerned us the most. Of course, it's taken many, many years for the case to make its way through the courts, and Power.com ceased even existing about five years ago. And the latest ruling is not just a nail in the coffin, but a potentially problematic CFAA ruling.

While the court tosses out the CAN-SPAM arguments, it does say that Power's actions were a CFAA violation. It's not as bad as it could have been, because the court doesn't say that merely violating Facebook's terms of service violates the CFAA, but instead narrows it slightly. It says that because Facebook sent a cease and desist letter to Power, from that point on Power was on notice that it was not authorized to access Facebook's servers. It was the decision to keep pulling Facebook user data after that point that sealed the CFAA claim.

Here, initially, Power users arguably gave Power permission to use Facebook’s computers to disseminate messages. Power reasonably could have thought that consent from Facebook users to share the promotion was permission for Power to access Facebook’s computers.
In clicking the “Yes, I do!” button, Power users took action akin to allowing a friend to use a computer or to log on to an e-mail account. Because Power had at least arguable permission to access Facebook’s computers, it did not initially access Facebook’s computers “without authorization” within the meaning of the CFAA. But Facebook expressly rescinded that permission when Facebook issued its written cease and desist letter to Power on December 1, 2008. Facebook’s cease and desist letter informed Power that it had violated Facebook’s terms of use and demanded that Power stop soliciting Facebook users’ information, using Facebook content, or otherwise interacting with Facebook through automated scripts. Facebook then imposed IP blocks in an effort to prevent Power’s continued access. The record shows unequivocally that Power knew that it no longer had authorization to access Facebook’s computers, but continued to do so anyway.

This is potentially a limited ruling, since there are a lot of specifics here. But it does still seem troubling. If I, as a user, wish to grant a service like Power access to my data, why can't I do so? The court insists that even if it's your information and you want to allow a service like Power to access it, Facebook has the final say -- because of something to do with banks and guns. Really.

The consent that Power had received from Facebook users was not sufficient to grant continuing authorization to access Facebook’s computers after Facebook’s express revocation of permission. An analogy from the physical world may help to illustrate why this is so. Suppose that a person wants to borrow a friend’s jewelry that is held in a safe deposit box at a bank. The friend gives permission for the person to access the safe deposit box and lends him a key. Upon receiving the key, though, the person decides to visit the bank while carrying a shotgun. The bank ejects the person from its premises and bans his reentry.
The gun-toting jewelry borrower could not then reenter the bank, claiming that access to the safe deposit box gave him authority to stride about the bank’s property while armed. In other words, to access the safe deposit box, the person needs permission both from his friend (who controls access to the safe) and from the bank (which controls access to its premises). Similarly, for Power to continue its campaign using Facebook’s computers, it needed authorization both from individual Facebook users (who controlled their data and personal pages) and from Facebook (which stored this data on its physical servers). Permission from the users alone was not sufficient to constitute authorization after Facebook issued the cease and desist letter.

The analogy seems a bit stretched, though I do get it. These are Facebook's servers -- but it still seems troubling that Facebook is basically using the CFAA to block what was really just a service trying to make Facebook more useful to users. This wasn't what one would normally think of as "hacking" in any real sense, which is what the CFAA was designed to respond to. And, as we've seen with the CFAA, this ruling seems wide open to abuse by companies. Furthermore, I'm uncomfortable with an argument that boils down to "if we tell you not to access this open web server, then it's like trespassing." Because it's not like that at all. An open web server is designed to accept traffic. Someone merely telling you that you can't access their website -- even though it's easy to do so technologically -- doesn't seem like it should then be treated as "unauthorized access" in a manner that makes you liable under computer hacking laws. That's a recipe for dangerous results. At what point is access revoked? Does it require a full cease and desist letter? Or what if I add a drop-down telling visitors from certain IP addresses they're not welcome?
What if I just type here that visitors from the state of New York are no longer allowed to visit Techdirt? If they continue to do so, is that a potential CFAA violation in the making? The same court has already ruled that a mere terms of service violation is not a CFAA violation, but where's the line between a terms of service violation and a cease-and-desist letter? Or me just telling you to stop visiting my website? It seems wide open to abuse. The CFAA remains a mess of a law, and rulings like these are likely only going to lead to more litigation around borderline cases -- bad for users and bad for innovation alike. It's been particularly disappointing to see companies like Facebook and Craigslist coming down on the wrong side of CFAA litigation -- in both cases going after companies that were not "hacking" in any traditional sense, but were looking to add useful layers of services on top of existing services. The law is being abused by companies that don't want others to innovate, and that's unfortunate.
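To make the "open web server" point concrete: a site operator can already enforce revocation at the technical level, no hacking statute required -- the server simply refuses the request. A minimal sketch using only Python's standard library (the handler name and the blocked address are made up for illustration):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical set of addresses the operator has "revoked" access for.
BLOCKED = {"203.0.113.7"}

class RevocationHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.client_address[0] in BLOCKED:
            # Technical revocation: refuse the request outright.
            self.send_error(403, "Access revoked")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"welcome\n")

    def log_message(self, *args):
        pass  # keep the demo quiet

# Demo: serve on an ephemeral port, fetch once from an allowed address.
server = HTTPServer(("127.0.0.1", 0), RevocationHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
status = urllib.request.urlopen(f"http://127.0.0.1:{port}/").status
server.shutdown()
```

That three-line `if` is the entire technical act of saying "you're not welcome here." The question the ruling leaves open is whether skipping the 403 and merely sending a letter should carry the same legal weight as actually shutting the door.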

posted 18 days ago on techdirt
The only way to make "bad optics" surrounding a questionable recruiting video vanish is to make the bad video vanish first, right? That's obviously what the Minneapolis Police Department believes. It has nuked its controversial recruiting video from DMCA orbit, netting citizen journalist Wedge Live a copyright strike for preserving something the MPD would rather just went away. Twitter user Tony Webster pointed out the end result of the MPD's efforts, which removed the video formerly located here (Update: as this story started to get press attention, it appears that the Minneapolis PD has rescinded its takedown). Fortunately for us -- and less fortunately for the MPD -- the video has been uploaded to Vimeo by Wedge Live, where it presumably awaits another questionable DMCA takedown notice from the police department.

The MPD used to be quite proud of its video, until it generated some complaints about its aggressive imagery. The video opens with two poorly thought out shots. In the first, a man in military gear pointing an assault rifle morphs into an MPD officer… carrying an assault rifle. The second shows a female beginning to throw a softball, which then morphs into a female police officer… pointing a gun at the camera. Neither of these opening shots does much to set the stage for the rest of the video, which is the usual assortment of talking heads and officers-in-action shots after that point. Nonetheless, the MPD does not host the video at its own YouTube channel, and on July 13 removed its link to the video from its own recruitment page. The archived version contains a link to the video. (And the link still works, but it's not hosted at YouTube.) The updated version does not.
If it weren't for the MPD's efforts to remove all traces of the video, this might have been chalked up to just a misguided effort to flex copyright muscle over something that was created with public funds and should, generally speaking, belong to the public, rather than the police department. But, considering the MPD has removed the link from its own webpage, it looks a whole lot more like an agency abusing the DMCA takedown system to remove something it considers less-than-flattering, especially in light of the Philando Castile shooting -- in which an officer killed Castile as he attempted to produce the ID the officer had just asked for. Castile was carrying a gun, but had a concealed carry permit and had informed the officer of that fact. When he reached for his ID, the officer shot him four times. The aftermath of the shooting -- as Castile died in his car next to his girlfriend and daughter -- was streamed live to Facebook.

So, it's not surprising the MPD would want its recruiting video to vanish, seeing as it opens in an aggressive and militarized manner. Unfortunately, the web doesn't forget just because the DMCA process has been abused. The MPD will have to live with its poor decisions for much longer than it planned to.
