posted 8 days ago on techdirt
The Privacy and Civil Liberties Oversight Board (PCLOB) is supposed to be an independent body that makes sure the intelligence community is not abusing its surveillance powers. It was created to go along with the PATRIOT Act, as a sort of counterbalance, except that it initially had basically no power. In 2007, Congress gave it more power and independence and... both the Bush and Obama administrations responded by... not appointing anyone to the PCLOB. Seriously. The Board sat entirely dormant for five whole years before President Obama finally appointed people in late 2012. Thankfully, that was just in time for the Snowden revelations less than a year later. The PCLOB then proceeded to write a truly scathing report about the NSA's metadata collection under Section 215 of the PATRIOT Act, calling it both illegal and unconstitutional. While the PCLOB was less concerned about the NSA's Section 702 program (which includes both PRISM and "upstream" collection from backbone providers), the group has been working for nearly two years on an investigation into Executive Order 12333 -- the main authority under which the NSA conducts its spying on people. However, as Marcy Wheeler points out, Congress seems to be bending over backwards to undermine and undercut the PCLOB. That's especially unfortunate, because at one point there was even a bipartisan effort to give the PCLOB more power, but things seem to have gone the other way instead:

As I reported, during the passage of Intelligence Authorization last year (which ultimately got put through on the Omnibus bill, making it impossible for people to vote against), Congress implemented Intelligence Community wishes by undercutting PCLOB authority in two ways: prohibiting PCLOB from reviewing covert activities, and stripping an oversight role for PCLOB that had been passed in all versions of CISA. In the 2017 Intelligence Authorization HPSCI passed on April 29, it continued more of the same.

The new changes are subtle, but problematic. The first is that the PCLOB is limited to spending money only on issues for which Congress has directly approved the spending. In other words, if Congress doesn't want the PCLOB investigating a certain area, no problem: it can just make clear that funding does not cover that area. That rather voids the PCLOB's supposedly "independent" nature. The second is a requirement that the PCLOB warn intelligence community bosses if it's going to investigate a new program. While these changes may not seem like a big deal, they suggest a clear attempt to undermine the power and authority of the PCLOB. Perhaps that's why the head of the PCLOB, David Medine, resigned just a few months ago, before his appointment was up. At a time when we need a lot more independent oversight of government surveillance powers, it's unfortunate to see Congress apparently pushing for less.

posted 8 days ago on techdirt
Given that the FCC has recently bumped the standard definition of broadband to 25 Mbps to highlight competition gaps; reclassified ISPs as common carriers; passed real net neutrality rules for the first time ever; taken aim at the industry's use of protectionist state law to keep the duopoly intact; pushed for improved broadband privacy rules; and is now taking aim at the cable industry's monopoly over set-top hardware, it's not really surprising that the cable industry isn't happy right now. One could argue (especially if you've studied regulation across the pond) that this is just what it looks like when a telecom regulator is doing its job after falling asleep for arguably fifteen years. But former FCC boss turned top cable lobbyist Michael Powell sees things differently. Powell took the opportunity at the cable industry's annual INTX trade show in Boston to throw a bit of a hissy fit, complaining repeatedly that the industry was under "relentless" and unprovoked "regulatory assault":

"We find ourselves the target of a relentless regulatory assault," Powell told attendees. "The policy blows we are weathering are not modest regulatory corrections. They have been thundering, tectonic shifts that have crumbled decades of settled law and policy... What has been so distressing is that much of this regulatory ordnance has been launched without provocation," said the NCTA head. "We increasingly are saddled with heavy rules without any compelling evidence of harm to consumers or competitors."

Who says telecom lobbyists can't be comedic geniuses? Of course, the cable industry enjoys some of the worst customer satisfaction ratings of any industry in America thanks to generations of regulatory capture and little real competition in broadband. After a generation of treating captive consumers poorly there's really not a more hated sector than cable, and the industry's reputation is only getting worse as it rushes to take advantage of limited competition and impose usage caps. As a result, complaints to the FCC have been skyrocketing. The "compelling evidence of harm" Powell demands should be apparent to everyone just by looking at their cable and broadband bill, and every time they call Comcast customer support. And despite a lot of cable sector chirping about "innovation," as AT&T and Verizon back away from unwanted DSL markets, cable broadband's monopoly is only growing in the face of less competition, meaning less incentive than ever to compete on price or improve customer service across huge swaths of territory. And you really can't find a man more responsible for keeping this status quo intact than Powell, who ran the FCC from 2001 to 2005. Powell was a vibrant example of sector dysfunction and revolving-door regulation, completely incapable of even admitting that the TV and broadband sectors had, or still have, problems. His tenure was just one chapter of a more-than-fifteen-year, bipartisan stretch during which the FCC was little more than a lapdog to the sector it was supposed to be policing. As such, cable enjoyed decades of almost total local, state and federal regulatory capture, all while crowing about the immense benefits of "free markets." The result of this aggressive dysfunction forged the cable industry we all know and love today. Powell is best remembered for his decision to try and push broadband over powerline as a major third avenue of sector competition, thereby justifying regulatory inaction on other fronts.
But Powell intentionally ignored something everybody in telecom had known for years: the technology would never actually work due to the massive radio interference it caused. By braying about broadband over powerline being the "great broadband hope," Powell managed to deflect criticism that he was busy making the sector substantially worse through total inaction and ineptitude. Other FCC bosses like Kevin Martin and Julius Genachowski carried on that proud tradition. Fast forward a decade, and Powell is now lobbying for the very companies he once "regulated," complaining about the unfair persecution of an industry that has been begging for a kick in the teeth for the better part of our adult lives. And while there are certainly plenty of sectors that deserve a hands-off regulatory approach to protect fledgling, organic market evolution, the cable sector is a unique, braying beast built on the back of apathy, revolving-door regulation, and an utter disdain for the captive consumers it serves. As such, Powell won't find too many people crying themselves to sleep just because the FCC finally decided to do something about it.

posted 8 days ago on techdirt
So over in the UK, they just had the annual Queen's Speech, in which the Queen lays out a bunch of regulatory proposals, and (as per usual) it's a bit of a mixed bag when it comes to the internet. As plenty of the headlines have blared, one part calls for universal broadband access, with a minimum speed of 10 Mbps (I'm assuming they're only talking about downstream speeds, rather than symmetrical, but who knows...). It would also include "automatic compensation" if your internet connection goes down. That's a very good starting point (I'd argue the speed should be even higher, but it's a start). But... with that comes some things that sound a lot... worse. First off, there would be an expansion of the ridiculous "porn licensing" program in the UK, whereby sites will need to do "age verification" if they have adult content. Not that anyone's saying that porn should be easily accessible to kids, but age verification is hardly foolproof, and can lead to a variety of other problems, including undermining the privacy of web surfers and a general chilling effect on creating certain types of content online, for fear of it being locked away or filtered if it's deemed too mature. There are also concerns about how the government implements its ridiculous plan for 10-year prison sentences for infringers, and how that will impact a free and open internet. And then there's the expansion of internet surveillance, which is equally worrisome. There's a lot of stuff about "restricting extremist activity" and trying to stop children from being radicalized ("think of the children!"). In theory, those may sound like nice ideas, but in practice, they're a broad framework for a massive censorship regime. Free speech groups are already raising concerns about all of this:

The new proposals should avoid creating an environment that could make it even harder for people of all faiths and ideologies to express their beliefs and opinions, the groups said. Current legislation already prohibits incitement to violence and terrorism, and a compelling case for broadening them further through civil measures has not been made. "The government's move to counter extremism must not end up silencing us all," said Jodie Ginsberg, Chief Executive of Index on Censorship. "We should resist any attempts to make it a crime for people of faith to talk publicly about their beliefs, for political parties to voice unpopular views, and for venues from universities to village halls to host anyone whose opinions challenge the status quo. We urge the government to use its consultation to ensure this does not happen."

As with many regulations, these feel like "x is a problem, something should be done, this is something" kinds of solutions, without much thought or concern for the nuances of implementation and the wider consequences (intended or not) of the proposals. That's unfortunate, especially when it comes to a platform as important and central to our lives as the internet.

posted 8 days ago on techdirt
It's a mantra I've been repeating for some time now, but the alcohol and brewing industry has a trademark problem on its hands. We've seen instance after instance of the explosion in the craft brewing industry being hampered and harassed over trademark concerns, both from within the industry and from the outside. Most of these disputes lay bare the fact that trademark law has moved well beyond its initial function of preventing consumer confusion into a new era of corporate bullying and protectionism. But at least in most of these instances, the victim is a victim only once. Larry Cary, on the other hand, must be starting to feel like a punching bag, having now had to change the name of his alcohol-making business twice over trademark concerns.

Cary opened North Coast Distilling on Duane Street in 2014. In October, he was sued by California-based North Coast Brewing, and spent about $10,000 changing his name to Pilot House Spirits. Then Cary was sued in January by House Spirits Distilling, which claimed his new name violates "established valuable trademark rights and goodwill throughout the United States." The distillery, known for Aviation American Gin, has registered "House Spirits" and "House Spirits Distillery" with the U.S. Patent and Trademark Office. His business will become Pilot House Distilling as part of a settlement with House Spirits Distilling.

To be fair, in the current climate, both of these trademark disputes ring as fairly valid from the complainant's perspective. The names in both instances were similar enough that I can understand the concern. That the solution the second time around was a name nearly indistinguishable from the one that had so offended House Spirits Distilling at once raises the question of just how injurious the original really was, and can also be seen as House Spirits Distilling behaving in an accommodating way. In other words, I can't really say there are any bad guys in this story. The problem instead is one of bloat. For the alcohol industry, this is at once an era of increased trademark protectionism, an era in which the bar for originality and uniqueness needed to get the USPTO to approve a trademark has clattered to the floor, and an era of exploding participation in the industry. That's a recipe for strife and confusion over who is allowed to enter the market using what language and under what circumstances. Some might fairly point out that these trademark protections have contributed to the explosive market to begin with. I would argue vehemently with them, but even those on that side of the argument must acknowledge that we're quickly approaching the point of diminishing returns. If the industry wants to continue to grow, it should be paying attention to the hurdles it's placing in front of startup participants.

Cary said he'll have to spend another $10,000 to $15,000 changing the name on all his products and properties to Pilot House Distillery, adding that every time he names his business or products, he checks trademarks. "What I've learned is even if you do everything right and you trademark it… if someone has bigger pockets than you, they can do whatever they want," he said.

What happens when we're faced with more stories of business folk playing brand-name musical chairs, all because there is too little language left available?

posted 8 days ago on techdirt
Geology is the ultimate riddle. All we have is a snapshot in time — the earth as it stands today — but within that snapshot are the remnant clues to untangling four and a half billion years of planetary development. Every turned stone might answer a question, or it might raise some new ones, as these latest steps towards a complete understanding of our planet's geology demonstrate.

Fossilized pebbles found in Australia might upend our timeline of how earth came to sustain life. The 2.7-billion-year-old stones show signs of oxygen in the atmosphere that wasn't supposed to show up until a few hundred million years later, when algae started pumping it out. [url]

The shape of Hawaii and the underwater islands that share its formation has long been a source of debate, and a radical new idea seeks to explain it. Instead of relying on plate tectonics, the new model suggests it all had to do with mixing plumes of mantle. [url]

A NASA satellite recently found something odd about the Caspian Sea: its bed is covered in mysterious scrape marks. Though it's possible the scratches are man-made, the most likely explanation is ice gouging during the sea's annual thaw. [url]

After you've finished checking out those links, take a look at our Daily Deals for cool gadgets and other awesome stuff.

posted 8 days ago on techdirt
If your social media "presence" has been submitted as evidence, you'd better leave everything about it unaltered. That's the conclusion reached by the judge presiding over a Fair Housing Act lawsuit. The plaintiff didn't go so far as to delete Facebook posts relevant to the case at hand, but did enough that the defense counsel (representing the landlord) noticed everything wasn't quite the way it was when the plaintiff was ordered to preserve the evidence. According to one of the lawyers for the defense, she accessed plaintiff’s accounts at one point despite not being “friends” with plaintiff. She later looked at the accounts and saw many posts were missing. The Plaintiff also testified that, to her knowledge, she never deleted anything. She did hide a few posts from her timeline which appeared there because she had been tagged by others. She said she thought she originally set her Facebook account to private and she merely double checked this after defendant filed its spoliation motion. Whether or not the plaintiff was telling the truth about the Facebook account's privacy settings ultimately doesn't matter. She changed something after being instructed not to. This resulted in posts being hidden from public view. According to the court, this flip of a digital switch was a violation of the order to preserve evidence. By altering her Facebook account, Thurmond violated the Court’s May 21 order. Her conduct had the effect of hiding her postings from public view, and hence from defendants’ counsel’s view. There were no sanctions for this action. Just a few stern words from the judge. The damage done was minimal as the defense counsel was still able to obtain the "missing" posts. The plaintiff herself offered to print out the hidden posts in an effort to comply with the order. Of course, this offer came after she had altered the privacy settings and the defense counsel had noticed the alteration. The damage, however, could cost the plaintiff her case, even if the judge isn't going to issue sanctions for violating a preservation order and even though the defense was able to recover the missing posts. Of course, it does not appear that the postings were deleted, and they remain available for defendants’ use, and defendants have not shown that they were prejudiced by Thurmond’s conduct in violating the order. Nevertheless, it is troubling that the posts were removed from public view after this Court issued a consent order designed to preserve the status quo of her social media accounts. Also troubling is Thurmond’s execution of an affidavit that contained a statement she knew to be inaccurate. Although the false statement was ultimately immaterial to the issues in the pending motions, Thurmond’s willingness to sign the affidavit knowing or having reason to know that it included a false statement threatens the integrity of the judicial process. Thurmond’s conduct in both respects is certainly a fair subject for cross-examination at trial and could result in the impeachment of her credibility. As Venkat Balasubramani points out, changing privacy settings on relevant social media accounts during litigation is something to do "at your own peril." In this case, the damage was minimal. At most, the plaintiff undercut her own credibility. That may cost her a positive ruling, but it won't result in anything more serious like jail time. 
A larger problem is the federal rules for evidence preservation, which can require preserving evidence you won't even know is evidence until you've been indicted. As we've seen in the past, rules meant to prevent corporations from using incriminating documents for bonfire fuel are instead being used by the feds to stack charges against defendants who've done normal computer housecleaning, like culling hard drive clutter or clearing their browser history. Sarbanes-Oxley says evidence -- which now apparently includes every bit of your digital presence in addition to physical files -- relevant to "foreseeable investigations" must be preserved. Since citizens don't initiate investigations, the ball is completely in the government's court, and every investigation seems "foreseeable" once it's underway. Those being investigated may not have seen it coming, but they're still saddled with an after-the-fact requirement to preserve evidence dating back to whatever arbitrary point the government declares to be the beginning of the alleged wrongdoing. Civil litigants may get away with nothing more than some words from an irritated judge, but federal defendants won't be nearly as lucky. Thanks to the misuse of this law, anyone changing privacy settings on a social media account does so "at their own peril."

posted 8 days ago on techdirt
Those of us who dwell on the internet already know the Internet Archive's "Wayback Machine" is a useful source of evidence. For one, it showed that the bogus non-disparagement clause KlearGear used to go after an unhappy customer wasn't even in place when the customer ordered the product that never arrived. It's useful to have ways of preserving web pages the way they are when we come across them, rather than the way some people would prefer we remember them, after they've vanished away troublesome posts, policies, etc. Archive.is performs the same function. Screenshots are also useful, although tougher for third parties to verify. So, it's heartening to see a federal judge arrive at the same conclusion, as Stephen Bykowski of the Trademark and Copyright Law blog reports:

The potential uses of the Wayback Machine in IP litigation are powerful and diverse. Historical versions of an opposing party's website could contain useful admissions or, in the case of patent disputes, invalidating prior art. Date-stamped websites can also contain proof of past infringing use of copyrighted or trademarked content. The latter example is exactly what happened in the case Marten Transport v. PlatForm Advertising, an ongoing case in the District of Kansas. The plaintiff, a trucking company, brought a trademark infringement suit against the defendant, a truck driver job posting website, alleging unauthorized use of the plaintiff's trademark on the defendant's website. To prove the defendant's use of the trademark, the plaintiff intended to introduce at trial screenshots of defendant's website taken from the Wayback Machine, along with authenticating deposition testimony from an employee of the Internet Archive.

The defendant tried to argue that the Internet Archive's pages weren't admissible because the Wayback Machine doesn't capture everything on a page or update every page from a website on the same date. The judge, after receiving testimony from an Internet Archive employee, disagreed. He found the site to be a credible source of preserved evidence -- not just because it captures (for the most part) sites as they were on relevant dates but, more importantly, because it does nothing to alter what it preserves.

[T]he fact that the Wayback Machine doesn't capture everything that was on those sites does not bear on whether the things that were captured were in fact on those sites. There is no suggestion or evidence … that the Wayback Machine ever adds material to sites.

Further, the judge noted that the archived pages were from the defendant's own website, and he'd offered no explanation as to why pages from his own site shouldn't be considered as evidence of alleged infringement. It's nice to know that what many of us have considered an independently-verifiable source of evidence is also acceptable in federal courts. It's more than just a handy way to preserve idiotic statements and potentially-illegal customer service policies. It's also a resource for litigants who might find their opponents performing digital cleanups after a visit from a process server.
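For readers who want to check what the Archive holds for a given page and date before relying on it, the Internet Archive exposes a public "availability" lookup that returns the closest stored snapshot. Here's a minimal Python sketch; the page URL and date are placeholders, and the exact JSON field names are an assumption based on how the endpoint is commonly documented, so treat this as illustrative rather than definitive.

```python
# Minimal sketch: find the archived snapshot of a page closest to a given date
# via the Internet Archive's public Wayback "availability" endpoint. The page
# URL and date below are placeholders, and the JSON field names reflect how the
# endpoint is commonly documented -- illustrative, not definitive.
import json
import urllib.parse
import urllib.request

def closest_snapshot(page_url, timestamp):
    """Return the closest archived snapshot (a dict with 'url' and 'timestamp') or None."""
    query = urllib.parse.urlencode({"url": page_url, "timestamp": timestamp})
    with urllib.request.urlopen("https://archive.org/wayback/available?" + query) as resp:
        data = json.load(resp)
    # Expected shape: {"archived_snapshots": {"closest": {"url": ..., "timestamp": ...}}}
    return data.get("archived_snapshots", {}).get("closest")

snap = closest_snapshot("http://example.com/terms", "20140101")  # placeholder URL and date
if snap:
    print("Archived copy from", snap.get("timestamp"), "at", snap.get("url"))
else:
    print("No archived snapshot found.")
```

Of course, pointing at a snapshot yourself is not the same as authenticating it in court; as in the Marten Transport case, that still took deposition testimony from an Internet Archive employee.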

posted 9 days ago on techdirt
Recently, we covered the ongoing jailing of a former Philadelphia police officer for his refusal to unlock encrypted devices for investigators. "John Doe" is suspected of receiving child porn but the government apparently can't prove its case without access to hard drives and Doe's personal computer. So far, it's claiming the evidence it's still seeking is a "foregone conclusion" -- an argument the presiding judge found persuasive. The "foregone conclusion" is based on an interview with Doe's estranged sister, who claims she once saw something resembling child porn on Doe's computer -- although she can't say for sure whether it involved the devices the government seeks access to -- and its own expert, who says it's his "best guess" that child porn can be found on the devices. Hardly compelling, but compelling enough that Doe has spent seven months in jail to date. The government has filed its response to Doe's motion to stay the contempt order. It argues that Doe can spend the rest of his life in jail for all it cares. If he wants to be released, he just needs to unlock the encrypted devices. (via Brad Heath) Doe faces no irreparable harm in the absence of a stay. In arguing otherwise, what he fails to recognize is that his imprisonment is conditional – it is based entirely on Doe’s continued defiance of the district court order. There can be no question that loss of liberty is a recognized harm. But Doe’s incarceration is by his own hand. His release pending an appeal is entirely avoidable through obedience to the court order. The government goes on to point out that Doe -- once he's unlocked the devices -- can then present his arguments for evidence suppression. Doe could choose to obey the court’s directive by unencrypting his devices, and his release would be granted. This is no way affects his appeal. He would still be able to persist in his appeal, and, if successful, the evidence the government would gain through forcing Doe to unencrypt his devices would be suppressed. The “irreparable harm” Doe complains of now is not “irreparable” in any sense, as it is entirely within Doe’s control. As the government notes, civil contempt charges are meant to be coercive. As such, the only person keeping Doe from being released from prison is Doe himself. Of course, if the drives contain what the government claims they contain, he'd just be exchanging an indefinite sentence for a more finite one. The added wrinkle to this case is the terms of Doe's confinement for contempt. Doe is in solitary confinement -- something the UN has declared to be torture -- supposedly for his own protection. It's generally true that the prison population has no love for child porn fans. They're not overly fond of imprisoned law enforcement officers either. And the nuances of the case -- that Doe has not actually been convicted of child porn charges but rather has been jailed for contempt of court -- will likely go unexamined by other inmates. So, it may be that Doe's solitary confinement would be less torturous than spending time in general population, but at the end of it, we have a person jailed indefinitely in solitary confinement for nothing more than contempt charges. The government's arguments on behalf of the jailing seem to assert that it has plenty of evidence already in hand. If so, the question is why the government hasn't moved forward with prosecution, rather than pushing for Doe to decrypt his devices. Either it has a case or it doesn't. 
If it doesn't, then the indefinite jailing is punitive -- a punishment for the defendant not being more helpful in building a case against himself, which is precisely the sort of compelled self-incrimination the Fifth Amendment is meant to protect against, no matter how the government chooses to phrase it.

posted 9 days ago on techdirt
Stories about copying turn up a lot on Techdirt. That's largely a consequence of two factors. First, the Internet is a copying machine -- it works by repeatedly copying bits as they move around the globe -- and the more it permeates today's world, the more it places copying at the heart of modern life. Second, the copyright industries hate unauthorized copies of material -- which explains why they have come to hate the Internet. It also explains why they spend so much of their time lobbying for ever-more punitive laws to stop that copying. And even though they have been successful in bringing in highly-damaging laws -- of which the DMCA is probably the most pernicious -- they have failed to stop the unauthorized copies. But if you can't stop people copying files, how about stopping them from doing anything useful with them? That seems to be the idea behind an IBM patent application spotted by TorrentFreak, which TorrentFreak summarizes as follows:

Simply titled "Copyright Infringement Prevention," the patent's main goal is to 'restrict' the functionality of printers, so they only process jobs when the person who's printing them has permission to do so. It works as follows. When a printer receives a print job, it parses the content for potential copyrighted material. If there is a match, it won't copy or print anything unless the person in question has authorization.

As with so many patents, the idea is simple to the point of triviality: only a company more concerned with the quantity of its patents than their quality would have bothered to file an application. Nonetheless, it's a troubling move, because it helps legitimize the idea that everything we do -- even printing a document -- has to be checked for possible infringements before it can be authorized and executed. But why stop with printers? We've already seen Microsoft's Protected Media Path for video, a "feature" that was introduced with Windows Vista; it's easy to imagine something a little more active that matches the material you want to view or listen to against a database of permissions before displaying or playing it. And how about a keyboard that checks text as you type it for possible copyright infringements and for URLs that have been blocked by copyright holders? There is a popular belief that the computer in Stanley Kubrick's "2001: A Space Odyssey" was named "HAL" after IBM, by replacing each letter in the company name with its predecessor. That's apocryphal, but with this latest patent application IBM is certainly moving squarely into HAL territory. Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+
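To make concrete just how blunt such a gatekeeper would be, here is a purely hypothetical sketch of the kind of check the application describes. This is not IBM's claimed method; the fingerprint registry, the authorization list, and the job format are all invented for illustration.

```python
# Toy illustration only: a print handler that refuses jobs whose content matches
# a registry of "protected" fingerprints unless the user is on an authorization
# list. The registry, users and job format are all invented for this sketch; it
# is not how the patent application actually implements its checks.
import hashlib

def fingerprint(content):
    return hashlib.sha256(content).hexdigest()

# Pretend a rightsholder registered this work (the bytes are obviously fake).
PROTECTED_WORKS = {fingerprint(b"the full text of Some Novel..."): "Some Novel"}
AUTHORIZED_USERS = {"Some Novel": {"alice"}}  # who may print which work (invented)

def handle_print_job(user, content):
    """Decide whether to print: a match blocks the job unless the user is authorized."""
    work = PROTECTED_WORKS.get(fingerprint(content))
    if work is None:
        return "printed"                       # no match, job goes through
    if user in AUTHORIZED_USERS.get(work, set()):
        return "printed (authorized)"
    return "blocked: matches '%s'" % work      # the troubling default: refuse the job

print(handle_print_job("bob", b"the full text of Some Novel..."))    # blocked
print(handle_print_job("alice", b"the full text of Some Novel..."))  # printed (authorized)
print(handle_print_job("bob", b"bob's own tax return..."))           # printed
```

Even at this toy level, the problem the post describes is visible: the check only knows that bytes match a registry entry, not whether the person printing has a perfectly legitimate reason to do so.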

posted 9 days ago on techdirt
The weird saga of the insanely thin-skinned Turkish President Recep Tayyip Erdogan continues. As you'll recall, he's on a legal crusade against a German comedian who recited a purposely ridiculous insulting poem about Erdogan on TV (as a response to the stories about Erdogan's thin skin). Erdogan's lawyers found a little-used (and little-known) "lese majeste" law on the German legal books that makes it a crime to insult representatives of foreign nations. The comedian, Jan Bohmermann, admits that the poem in question was over the top, but that was the point. When you hear about a foreign leader spending so much effort trying to sue anyone who insults him, no matter how slight the insult, it's actually pretty tempting to add to the pile. For ridiculous geopolitical reasons, German Chancellor Angela Merkel has allowed the case to move forward, and now a Hamburg court has told Bohmermann that he has to stop repeating at least some of the poem so as not to offend the sensitive ears of Erdogan:

In Tuesday's ruling the court found that "Erdogan does not have to put up with the expression of certain passages in view of their outrageous content attacking (his) honour."

Why not? While it may sound flip, it's a serious question. He's the leader of a country of almost 80 million people. Shouldn't we be at least a little concerned that he apparently turns into a cowering puddle of emotions the second people make fun of him? Most of us put up with other people insulting us just fine, and we aren't leaders of a major nation state. Why is a German court so willing to toss out basic free speech protections for satire just to please a foreign leader who can't take a joke? The court didn't ban the entire poem, but even just picking what can and can't be said seems like a ridiculous thing for a court to be involved with at all:

The court ruled that only six lines of the 24-line poem by German comedian Jan Boehmermann could be recited, offering the Turkish leader a partial legal victory.

What a shameful ruling for Germany.

posted 9 days ago on techdirt
Dive into the world of microcomputers with the Complete Raspberry Pi 3 Starter Kit. For $120 (55% off), you will receive a Raspberry Pi 3 and a quick start kit which includes an 8 GB SD card with Raspbian OS pre-installed, power cord and various cables to get your Raspberry Pi 3 up and running in no time. You also gain access to 6 courses covering everything from how to automate your home to building robots to parallel programming and more to help you take full advantage of what the Raspberry Pi 3 is capable of. If you already have a Raspberry Pi 2, most of your accessories will work with the 3. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

posted 9 days ago on techdirt
The Copyright Office has been holding a series of "roundtable discussions" on copyright reform that it's going to use to produce a paper supporting certain changes to copyright law. We already know that some sort of copyright reform bill is expected in the near future, and what comes out of this whole process is going to be fairly important. Unfortunately, the roundtables are not encouraging. There was one held in NY a few weeks ago, which Rebecca Tushnet blogged about in great detail, and I attended the ones last week in San Francisco and I've gathered up my tweeted commentary, if you feel like reading through it. Unlike the House Judiciary Committee roundtable that was held in Silicon Valley last year, where the Representatives surprised many of us by actually asking good questions and listening to the answers, the Copyright Office's roundtables were bizarre and troubling. First, the whole setup of the two two-day events was problematic. The Copyright Office wanted to make sure that everyone who applied to speak was allowed to participate in some manner, so for each set of roundtables, they set up 7 roundtables of 20 people each on pre-defined topics, where each roundtable was only 90 minutes. This was problematic in many ways, because the very fact that the Copyright Office already assigned the topics of each "panel" suggested the preconceived notions that the Office came into the hearings with. Second, 20 people in 90 minutes meant a kind of soundbite culture, rather than an actual discussion. Basically, 4 people from the Copyright Office sat up front and would declare the topic and ask a single (usually general) question to kick off the panel. Then anyone who wanted to respond would turn their name placards on the side. Each person speaking was told they had 2 minutes, and a Copyright Office person would hold up cards counting down the time until finally holding up a giant STOP sign when time ran out. If the Copyright Office people felt like it, they might pepper you with follow up questions, or just move on to the next upturned placard. Only a few times during two full days of everyone locked in a courtroom was there any real discussion or attempt at moving the ball forward on issues. Instead, there were a lot of 2 minute speeches, often totally contradictory, and sometimes with people saying blatantly false things. The hearings were not livestreamed, though there was a court reporter who apparently is creating a transcript that will be distributed at some point (hopefully soon). While anyone could sign up to participate, the public was definitely a big missing factor. The Copyright Office itself repeatedly seemed to act as if copyright law is about two industries at war with each other -- "content" v. "tech" -- and unfortunately seemed to think that all content creators who mattered were against the tech industry. While the public could, in theory, have attended, there was very little seating available for non-participants, and we were all quite literally locked into a courtroom in San Francisco (getting out was always fun as we had to wait for the doors to unlock). If I had to summarize the most general theme throughout the hearings, it was this: many individuals and businesses that relied on a certain way of doing business in past decades have not been able to adapt to the changing internet world, and they're very upset about this. They see that companies like Google (especially) are now making lots of money, and assume that somehow that money belongs to them. 
This ridiculous chart was waved around by one of the participants. It's basically the old way of doing business lashing out at the new way. Attempts to point out that there are many more new content creators today, and many more ways to make money, were dismissed, ignored or ridiculed. Attempts to point out that the "obvious" solutions weren't at all obvious and might cause more harm than good were similarly ignored. There were many attempts to "blame Google" and to use copyright law to "force Google" to do... something. Many of the things that people were asking for, Google actually already does, but people wanted more. With that as background, here are twelve really troubling ideas that were raised during the hearing, many (though not all) by representatives from the Copyright Office itself: Notice & Staydown: If there was one overarching theme to the hearings it was the idea that many kept pushing for a "notice and staydown" regime to replace the current "notice and takedown" regime in the DMCA today. There would be weird alternating comments from people where a recording industry person would say something like "look, we need notice & staydown!" and then someone who actually understood the technology or the law would explain the problems of a notice & staydown regime, only to have the next person ignore all of that and say "what we truly need is notice and staydown." Concerns raised about abusive takedowns were mostly ignored. The Copyright Office kept arguing that people can just counternotice abusive takedowns, and didn't seem to care that evidence suggests many people are too afraid to counternotice. They also totally ignored the fact that if you move to a notice and staydown regime, there won't be notices to counter in many cases. Related to this, there seemed to be little willingness to recognize that copyright is context specific, and that the same content in one realm may be infringing, while in another may be non-infringing. Which brings us to: Notice & Staydown For Full-Length Content: This appears to be the "compromise" solution that the Copyright Office kept pushing during the discussions, repeatedly bringing this up as an option. Their argument is that "okay, some people are worried about notice and staydown interfering with fair use, but 'full-length content' isn't fair use, so perhaps we just say that if it's 'full-length' movies, books and music, then any platform needs to have a notice & staydown setup." Of course, there are numerous problems with this as well. First off, such filtering platforms are both expensive and often not very good (see: ContentID, which cost $60 million to build, and still sucks). A requirement for such a filtering system would basically stop all new entrants into the market and lock the big players (YouTube, Facebook) in as the dominant platforms. It also doesn't make sense for all kinds of content. A lawyer from Wikimedia pointed out that a technology filter mandate would be insane for Wikipedia, since its human editors already seemed to be better and more efficient at removing infringing stuff. Most importantly, plenty of full-length content may be non-infringing. I mean, we just had the Google Books ruling not too long ago, which found that scanning, indexing and storing full-length books was fair use. Or there's the Bloomberg/Swatch case where full-length recordings of investor conference calls were used in a way that was fair use. Or what about people backing up their own movies? 
After all, one of the most famous copyright lawsuits ever, the Betamax case over the legality of home video recordings, found that recording full-length video... was fair use. Creating a bright line rule saying that full-length content isn't fair use would go against settled law and harm many forms of innovation. And that brings us to: One-size-fits-all tech mandates and carve-out attempts: Because of the points above, the Copyright Office and some others kept trying to see if there were ways to write the rules so that they would only target certain players (i.e., "Google"). This makes little sense for a variety of reasons. First, as Google itself noted, when you're talking about something like YouTube, it already offers a notice-and-staydown tool in the form of ContentID, but many people choose not to use the staydown portion, because Google also offers an option to "monetize" those works. Whether or not we agree with how ContentID works, it's clear that it already provides a lot more than what the law currently says YouTube needs to do, and people still aren't satisfied. But it's nearly impossible to create mandates for technology in a way that doesn't (1) harm many other companies in the space and (2) create weird and dangerous incentives. The most common suggestion was some sort of special safe harbor for "small" companies, so that the next startup wasn't burdened with having to build or buy a filter... until they reached a certain size. But that doesn't work either. Both Wikimedia and the Internet Archive noted that in terms of traffic, they're both pretty big... but they're also both non-profits with limited funding and where a mandated tech filter would be both prohibitively expensive and total overkill. So, then the Copyright Office suggested maybe a carve-out for "non-profits." But, as someone from eBay pointed out, that still creates problems for a site like eBay. eBay notes that it doesn't have a huge issue of copyright infringement, but there is always some that happens -- usually in the form of people using photographs for an auction without having a license. But, as eBay notes, the "harm" here is pretty minimal, and the idea that eBay should need to purchase an expensive tech filter to weed that out is clearly overkill. Only some kinds of content matter: Perhaps the most frustrating thing was how clearly the Copyright Office and many of the participants who had experience in the legacy content creation world seemed to totally dismiss the idea of new content creators, new kinds of content and new content business models. When lawyer Cathy Gellis mentioned that she was an amateur singer who was able to make some money today, whereas in the past she'd be totally out of luck, the Copyright Office's immediate response was to basically say, "but that doesn't count" for content creators who need to invest in their content creation. Similarly, one musician did the "get off my lawn" style of complaint, saying that all the music today sounds the same, and blamed piracy and Google for that happening (which is incredible, since there is so much more music, including so many different kinds of music, more widely available than ever before). Over and over again people suggested that their own content mattered much more than anyone else's. Too often, panelists absolutely dismissed amateur content or even professional content from new kinds of content creators. People dismissed YouTube as just being about "cat videos." And, again, the Copyright Office seemed to support this idea. 
In fact, somewhat astoundingly, at one point, the Copyright Office's General Counsel, Jacqueline Charlesworth, tossed out the suggestion that political content should get extra rights, compared to other kinds of content, suggesting that maybe political related videos could get a carve-out such that it would get put back up online faster if it gets a bogus takedown, whereas a "dancing baby" doesn't matter enough. That would clearly violate the First Amendment in determining that some kinds of speech get extra rights compared to other kinds of speech. Service providers shouldn't be allowed to reject takedown notices: This one was pretty incredible (especially combined with the next one). The Copyright Office's General Counsel seemed taken aback by the idea that service providers might choose, of their own volition, to reject DMCA takedown notices. She came back to this point multiple times on day one, suggesting that she was somehow uncomfortable with the idea that a service provider might choose to "adjudicate" whether or not a takedown notice was valid. This was surprising, because the law is pretty clear. The DMCA does not say what a service provider must do. It just describes steps that are necessary to keep the safe harbors. That doesn't mean that if you don't follow the safe harbor requirements you're violating the law, it just means that if you're sued, you don't get an automatic pass on liability. But Charlesworth seemed really disturbed by that idea. And that's problematic, because if you talk to various platforms that receive lots of takedowns, they need to be able to reject some to avoid completely bogus takedowns. Service providers should lose their safe harbors based on a single decision to reject safe harbors over a single takedown: This one could potentially be lumped together with the one above, but it's so crazy that it deserves its own bullet point. Basically, Charlesworth suggested that if a service provider decides to forego the safe harbors for a single item (e.g., refusing to take down content based on an obviously frivolous takedown notice), it might mean that they've removed safe harbors for the entire site. This came up during a direct discussion with a lawyer from Google, in which she interrupted him to ask why a service provider would refuse to take down content, and it was explained that if the takedown was obviously bogus, a site shouldn't take it down. Charlesworth then questioned the lawyer about whether or not the safe harbors only applied to each individual instance, or to the site as a whole, leaving many people in the audience stunned that this was even a question -- including the Google lawyer, who basically said that "obviously" the safe harbor question applied to each individual takedown. Put these two things together and the suggestion is that service providers should never be allowed to question a takedown, no matter how bogus, and if they do, they automatically lose the DMCA's safe harbors entirely. Talk about a recipe for mass censorship. Punishment for false counternotices: Another bizarre idea that came up during these discussions was adding punishment for false counternotices. This is crazy. The system is already totally imbalanced in favor of the person sending the takedown. They get to remove content in most cases based on a single email, and there's no real punishment for false takedowns at all. 
But after one person complained about false counternotices (a problem that I find it difficult to believe actually exists), suddenly the Copyright Office suggested that maybe there could be additional punishment for bogus counternotices (but without suggesting there should be any punishment for false notices). This makes no sense. There's already punishment for false counternotices, which is that the copyright holder gets to sue the person for up to $150k per infringed work. And, indeed, as various studies have pointed out, the fear of such a lawsuit already greatly chills the willingness of many people to file counternotices at all. The idea of chilling the counternotice process even further, especially without fixing the problem of abusive takedowns, is crazy. Artists struggling to make money have piracy/Google to blame: Over and over again speakers and the Copyright Office kept pointing out this or that content creator who is struggling to make money as proof of "the problem." But that's a massive logical leap. I'm sure that some content creators are losing some earning opportunities because of infringement, but making money as a content creator has always been incredibly difficult. Most musicians don't make very much money. Most movies don't make much money. And it was that way before the internet was even around, and it's still true today. It seems ridiculous to assume that single examples of artists struggling must be a result of piracy. The author TJ Stiles, who is on the board of the Authors Guild, literally claimed that because of infringement he was less creative and wrote less, saying piracy meant he couldn't pay healthcare or his mortgage. Yet, I can just as easily point to new authors or other content creators who are now making a living entirely because of these new platforms that so many people at last week's hearings were angry about. These days, there are new business models using Kickstarter, Patreon, IndieGogo, YouTube, Vine, Instagram, Snapchat, Amazon and many, many more. And many of the artists making money from those platforms wouldn't even have been able to create content in the old world. So those seem to wipe out the claims that it's "piracy" undermining the ability to make money. It seems to suggest that it frequently is a number of other factors from (1) a lack of an audience, (2) an audience that is no longer interested, (3) an inability to go where the audience wants to go these days or (4) an unwillingness to adapt to new business models. One filmmaker even claimed that he didn't think the law should allow "new media" business models to upend "old media" business models. That's an interesting viewpoint, given that under that thinking, film shouldn't have been allowed, since it undermined the market for theater. Public comments are a "denial of service" attack: As mentioned earlier, it was somewhat distressing how frequently the public was ignored in all of this. The Copyright Office was clearly not entirely happy with the fact that Fight for the Future and the YouTube creators (yes, actual content creators!) urged the public to file comments with the Copyright Office on the problems with bogus takedowns. But perhaps the most ridiculous comment came from (known Techdirt hater) Jonathan Taplin, who suggested that because the Copyright Office's servers couldn't handle the ~87,000 inbound comments, it might be considered a "denial of service attack." Really? Helping the public to comment is a denial of service attack? 
On one of the panels that I was on, I raised the issue of the public being left out of all of this, and the focus on tech v. content -- neither of which might have the public's best interests in mind. The Copyright Office asked about how to better engage the public, and I suggested that encouraging more public comments, rather than dismissing them or labeling them "zombies" or a denial of service attack, might be a good start. Gov't-supported "Voluntary measures" as a panacea: Repeatedly this was one area where the Copyright Office kept claiming "we have some agreement!" where both tech and the legacy content players kept insisting that "voluntary measures" between these parties could solve many of the issues. Representatives from Microsoft and Google excitedly talked up all the great "voluntary measures" that were going on in working with the content industries to come up with solutions that went beyond what the law required. Others talked up various efforts to stop ad networks or payment processors from working with "pirate sites." But, again, this seems dangerous to me. Voluntary measures between two giant industries leave the public and the public interest out of the discussion. Attacking "pirate" sites sounds great until you ask, "how is that list made?" Considering that the Internet Archive, personal blogs and even 50 Cent's personal website have been called pirate sites in previous attempts to build such lists, it's not quite as obvious as some would have you believe. On top of that, lots of brilliant innovations look like piracy when they're first created, but later turn out to be revolutionary. As I noted in my 2-minute soundbite opportunity, radio, the photocopier, cable TV, the VCR, the DVR, the mp3 player and online video were all decried as nothing but piracy tools when they first came out. And yet all of them turned into significant business opportunities for content creators. Cutting those off in the early days through the collusion of "voluntary measures" could stifle all sorts of powerful innovations that actually would help content creators make more money and reach a wider audience. The Lumen Database as a rogue site: One filmmaker was particularly concerned about the fact that Lumen Database (formerly known as ChillingEffects.org) archived DMCA takedown notices that were forwarded to the site. The claim was that people seeking infringing versions of content would search through Lumen to find the URLs listed for takedown. Someone else pointed out that there's no evidence that this is happening at any sense of scale, but someone from the Copyright Office substituted anecdotes for data, and claimed, "well, it's come up a lot." The suggestion was that perhaps Lumen should be required to redact the details of takedowns, and that only certified researchers should be allowed to access the full corpus. This is troubling on any number of levels. One of the things that was clear throughout the discussion is that we need more data and more research into what's working and what's not working. And you don't do that by locking up important data and then suggesting that only a special class of people should be allowed in. Though, I guess it fits with the idea that only some kinds of content creators matter. Apparently only some kinds of researchers might matter as well. Technology is magic: Over and over again, folks who actually understood the technology pointed out why technology is no magic bullet. You can't automate understanding what is and what is not infringing. 
But someone from a lobbying group for the legacy copyright players pulled out the "you're all so smart, nerd harder" card by saying that if Silicon Valley can build a self-driving car, surely it can build a technology that can determine what is and what is not fair use. This has echoes of the debate over backdooring encryption, which is another of these "nerd harder" situations, in which non-technologists assume that technology is magic and that doing x and doing y are somehow equivalent. All I can say in response to that is the obligatory xkcd.

All in all, the two days of hearings were somewhat frustrating. I had hoped they might be a way to have some productive discussions that actually focused on what was best for "promoting the progress." Instead, they seemed to involve the Copyright Office coming in with preconceived notions, mainly focused on how copyright law can be changed to better prop up legacy business models, at the expense of the public and innovation -- including all of the new creators who rely on that innovation.
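To make that last point concrete, here is a deliberately naive sketch of what a fingerprint-based "staydown" filter actually does. Everything in it is invented for illustration -- this is not how ContentID or any real system works -- but it shares the blind spot the technologists at the hearings kept describing: a byte-level match says nothing about the context of the use.

```python
# Deliberately naive illustration of a fingerprint-based "staydown" filter.
# All data and names are invented; real systems are far more sophisticated,
# but they share the blind spot shown here: a content match carries no
# information about the context of the use.
import hashlib

REFERENCE_WORKS = {hashlib.sha256(b"full-length film bytes...").hexdigest(): "Some Film"}

def staydown_check(upload):
    work = REFERENCE_WORKS.get(hashlib.sha256(upload).hexdigest())
    if work:
        return "blocked (matches '%s')" % work
    return "allowed"

# Identical bytes, very different legal situations -- the filter can't tell them apart:
uploads = [
    ("pirate re-upload",               b"full-length film bytes..."),
    ("lawful owner's personal backup", b"full-length film bytes..."),
    ("upload by an actual licensee",   b"full-length film bytes..."),
]
for label, blob in uploads:
    print(label, "->", staydown_check(blob))   # same verdict all three times
```

Which is why a bright-line rule that "full-length content" can never be fair use, or a mandate to bolt filters like this onto every platform, bakes exactly the wrong assumption into the technology.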

posted 9 days ago on techdirt
While larger cable companies have the scale and leverage necessary to negotiate better programming rates, smaller cable companies are finding themselves facing tighter and tighter margins as broadcasters push for relentless programming rate increases. As such, many have begun candidly talking about exiting the pay TV sector entirely and focusing on broadband service only. When approached by broadcasters like Viacom about major hikes, some cable operators have simply culled the channels from their lineup permanently and refused to look back. Not too surprisingly, the narratives told by these smaller cable companies differ notably from those of the larger cable operators, many of which deny that pay TV is caught in an unsustainable death spiral thanks in part to relentless broadcaster demands. CableOne CEO Thomas Might, for example, candidly declared last week that the traditional cable sector is a "tragedy of the commons" that's going to end badly for everyone involved:

"The actions of content owners is easily explained by the concept of tragedy of the commons. Once one programmer started taking double-digit rate increases, even in the face of falling ratings, each of the other programming groups felt compelled to do the same. The reason the theory is named "tragedy" is because it is guaranteed to end badly for all in the long run. It appears that long run is finally arriving."

And again, while large cable operators and broadcasters have denied cord cutting's very existence -- and downplay "cord shaving" (reducing your cable package or opting for a skinny bundle) at every opportunity -- Might states the obvious in noting that the kids just aren't watching regular TV anymore:

"Linear video ratings are plummeting for several reasons. The lower end of the market can no longer afford the big bundle; the number of disruptive OTT technologies and vendors are now multiplying rapidly; and the millennial generation has very limited interest in traditional TV viewing. These patterns will inevitably bring an end to the ubiquitous fat bundle, but only slowly and painfully."

As we've long noted, cable operators could pretty easily defeat cord cutting by competing on price and value. But instead, their solution so far has been to raise rates on broadband and TV like it's going out of style, to impose usage caps to punish cord cutting, and to offer "skinny bundle" packages that give the illusion of value but saddle users with misleading post-sale fees. Only when cord cutting shifts from a trickle to a steady roar will most major cable executives finally change tack, at which point they'll be surrounded by an ocean of hungrier, leaner companies all doing what cable refused to do for a generation: offer a cheaper, more flexible pay TV product.

Read More...
posted 9 days ago on techdirt
Beyond James Comey, there are still a few law enforcement officials beating the anti-encryption drum. Manhattan DA Cyrus Vance is one of those. He's been joined in this fight by some like-minded district attorneys from the other coast, seeing as New York and California both have anti-encryption bills currently working their way through their state legislatures. Vance, along with Los Angeles County DA Jackie Lacey and San Diego County DA Bonnie Dumanis, penned an op-ed against encryption for the LA Times. In it, they argue that tech companies have set themselves up as "gatekeepers" of communications and data, which they believe law enforcement should always have access to, no matter what. DA Dumanis goes even further in a press release issued by her office. Tech companies aren't just gatekeepers standing between law enforcement and data. They're "gatekeepers of justice," apparently standing between victims of crime and punishment of wrongdoers. The EFF's Dave Maass has fired back, via a post at the Voice of San Diego, pointing out that Dumanis especially shouldn't be inserting herself into the encryption debate -- not with her general disdain for the security of her constituents. It opens with this: The last person San Diego should trust with their computers and smartphones is District Attorney Bonnie Dumanis. And goes on to clearly articulate why Dumanis has no business attempting to legislate computer security. Dumanis spent public money acquiring and pushing a horrendously insecure piece of "parental monitoring" software. In 2012, Dumanis spent $25,000 in public money on 5,000 copies of a piece of “parental monitoring” software called ComputerCop. This CD-ROM, which was distributed to families throughout the county for free, included a video from Dumanis promoting the program as the “first step” in protecting your children online. This first step, however, involved parents installing keylogger software on their home computers. This type of technology is a favorite tool of malicious hackers, since it captures everything a user types, including personal information such as passwords and credit card numbers. Not only did ComputerCop store keylogs in an unencrypted file on the person’s computer, but it also transmitted some of that information over unsecured connections to a mysterious third-party server. Two years later, Dumanis finally pulled the plug on the publicly-funded program, admitting the monitoring software was faulty and telling parents to disable the insecure keylogging function. Dumanis was hardly the only DA to recommend this terrible software, but she's one of the few who's stuck her head above the encryption parapet to offer her support of the Feinstein-Burr anti-encryption bill. But that's not all. Dumanis and her office won't even secure their own website. The district attorney’s website fails to use HTTPS, the protocol that has become the industry standard for secure browsing online. This means that residents, including crime victims, whistleblowers and witnesses, cannot visit her site with confidence that their browsing won’t be intercepted or manipulated by third parties. Dumanis -- like Vance, Comey, and others -- would rather sacrifice the safety of the public for a few more criminal prosecutions. The "greater good" apparently means nothing when a very small percentage of cases might involve encrypted communications or devices. Law enforcement has never had more access to communications and data than it does now. 
In the past, files were burned, papers were shredded, people passed notes and spoke in person -- all of which rendered them inaccessible to law enforcement. The fact that these files and communications are now conveniently stored en masse on cellphones and personal computers does not mean the government is somehow entitled to 100% access. A warrant that runs into encryption is a small price to pay for the security of millions of cellphone users. Despite maintaining the narrative that criminals are moving toward encrypted platforms, law enforcement reps and officials have yet to deliver any evidence that this is so widespread that backdooring or banning encryption is the only option. And the loudest law enforcement voices protesting tech companies and their "gates" are often those who care the least about protecting innocent people from criminals. Permalink | Comments | Email This Story
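A side note on the HTTPS point above: "fails to use HTTPS" simply means the site answers over plain, unencrypted HTTP and never redirects visitors to an encrypted connection. Below is a minimal sketch (in Python, standard library only) of how one might check that for a given site; the hostname is purely a placeholder, not a reference to any particular office's server.

import urllib.request

def enforces_https(hostname: str) -> bool:
    # Request the plain-HTTP version of the site, follow any redirects,
    # and see whether we end up on an https:// URL.
    resp = urllib.request.urlopen(f"http://{hostname}/", timeout=10)
    return resp.geturl().startswith("https://")

print(enforces_https("example.org"))  # placeholder hostname, for illustration only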

Read More...
posted 9 days ago on techdirt
We've talked about cheating in academia in the past, usually revolving around whether or not what used to be called cheating might be better thought of as collaboration. Beyond that, we've also talked about some of the strategies used to combat the modern incarnation of "cheating", which has included the monitoring of students' online activities to make sure they weren't engaged in cheating behavior. Well, the nation of Iraq doesn't have time for all of this monitoring and sleuthing. When its students have their standardized tests, they simply shut the damned internet off completely. For a few hours each morning, the Iraqi government keeps cutting off internet access—to keep students from cheating on their end-of-year exams. As reported by DYN research, which tracks internet blackouts around the world, the country’s access went almost entirely dead between 5 a.m. and 8 a.m. in the morning on Saturday, Sunday and again on Monday. And this isn't the first time the Iraqi government has gone about things in this way. Last year, they pulled the same lever to shut down internet access to the country, with the same explanation that it was combatting a scourge of question and answer sharing occurring online. What's interesting about this is that the real problem appears to be the teachers, not the students. Teachers in Iraq are apparently regularly bribed by students to share the questions and answers to tests, and those leaks are then spread across the internet for other Iraqi students to see. “What happens usually is that some teachers would be giving the exams questions to students who pay money, then [those] students would sell online questions all over country,” one Iraqi, who requested his name not be used in a story, told Vocativ. “Between 5 a.m. to 8 a.m. [is when teachers finalize questions] so this is the time when teachers [who have been paid off would] give questions to students by Facebook or Viber or Whatsapp and so on.” Now, perhaps this move is effective in its aims. I don't know, since students looking to cheat haven't exactly always required the internet to do so. Still, even if it were, there must be another more subtle yet effective way to combat this cheating scourge. Perhaps one that doesn't interrupt internet access for, oh I don't know, everyone else in the entire country. Because the effects of this blackout aren't exactly limited to students. Human rights groups were outraged at the outage. “We see this, especially in such a destabilized country as Iraq, as really terrible. It’s a lot of people under a media and communications blackout,” Deji Olukotun, Senior Global Advocacy Manager at an internet freedom nonprofit, told Vocativ. Come on guys, figure this out. Permalink | Comments | Email This Story

Read More...
posted 9 days ago on techdirt
The vast, vast majority of time when we point to new academic research, we end up linking to the research hosted on SSRN, which stands for the Social Science Research Network. SSRN has been around for a long, long time, and it's basically the go-to place to post research in the legal and economics worlds -- the two research areas we most frequently write about. At this moment, I have about 10 SSRN tabs open on interesting papers that I hope to write about at some point. Technically SSRN is what's known as a "preprint server," where academics can share papers before peer review is completed and the final papers end up in a locked up, paywalled journal. The kind of paywall run by a giant company like Elsevier. So it's been quite distressing to many this morning to find out that Elsevier has now purchased SSRN. Everyone involved, of course, insists that "nothing will change" and that Elsevier will leave SSRN working as before, but perhaps with some more resources behind it (and, sure, SSRN could use some updates and upgrades). But Elsevier has such a long history of incredibly bad behavior that it's right to be concerned. Elsevier is not just a copyright maximalist (just last week at a hearing I attended involving the Copyright Office, Elsevier advocated for much more powerful takedown powers in copyright). It's not just suing those who make it easier to access academic info. It's not just charging insane amounts for journals. It also has a history of creating fake peer reviewed journals to help pharmaceutical companies make their drugs look better. And it also has a history of lobbying heavily against open access, while similarly charging for open access research despite knowing it's not supposed to do this. So, quite obviously, there is reason to be concerned that Elsevier may make some "changes" to SSRN that make it a lot less valuable for the sharing of academic research and papers in the near future. Some are already suggesting it's time to build a new service (either as a nonprofit or a trust) to take over what SSRN was doing in the past. Or, alternatively, there's talk of getting other preprint servers, like the famed arXiv to start handling social sciences research as well. Another alternative might be just to see if the Internet Archive is willing to take on this kind of project itself. Once again, though, it shows just how messed up copyright has become. Copyright is not the reason any of these papers gets written, and now copyright is seen as a weapon against the sharing of knowledge. When copyright was first put in place in the US, we were told it was to encourage the sharing of such educational resources. That may have been a lie at the time (it was designed as a tool for publishers), but if we're going to have a copyright system that claims to be about promoting science, at the very least, we should be able to live in a world where it really is easy to share academic research without fear that a copyright claim is going to destroy everything.Permalink | Comments | Email This Story

Read More...
posted 9 days ago on techdirt
People have been looking up into the sky for centuries, wondering what's out there and if we're alone on this world. Astronomers, more recently, have been looking into deep space with some relatively high-tech equipment -- finding some strangely inexplicable phenomena (that could be alien megastructures?!) and still wondering if we're alone in the universe. We may never know for sure if intelligent life exists anywhere else, but it doesn't hurt to look, does it? The Breakthrough Prize Foundation is aiming to shoot lasers at a light-propelled nanocraft that could reach Alpha Centauri in a few decades (instead of millennia). This lightsail spacecraft would have a mass of just a few grams, so it could be accelerated to speeds of 100 million miles per hour -- much faster than any spacecraft we've ever built (like Voyager I zipping away at about 38,500 mph). [url] The Kepler space telescope has gotten plenty of headlines for finding thousands of exoplanets, but the far lesser-known, Belgian-operated TRAPPIST (TRAnsiting Planets and PlanetesImals Small Telescope) is finding some exoplanets, too. The TRAPPIST telescope is stuck on the ground (unlike KST), but it's looking at a few dozen ultracool dwarf stars and has found 3 planets orbiting a star that's just 0.05 percent as bright as our Sun. [url] Kepler 36b and Kepler 36c are two exoplanets orbiting the same star, each of which could possibly harbor some kind of microbial life (but probably not). Still, it's an interesting question whether or not life -- if it exists on either Kepler 36b or 36c -- could be transferred to its neighboring planet. (Though maybe we should focus on looking at Venus and Mars first....) [url] After you've finished checking out those links, take a look at our Daily Deals for cool gadgets and other awesome stuff.Permalink | Comments | Email This Story
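The "few decades (instead of millennia)" framing checks out with simple arithmetic. Here is a quick back-of-the-envelope calculation in Python; the speed of light and the distance to Alpha Centauri are my own rough figures, not numbers from the linked stories, and acceleration time is ignored.

C_MPH = 670_616_629        # speed of light, in miles per hour
DISTANCE_LY = 4.37         # approximate distance to Alpha Centauri, in light-years
SAIL_MPH = 100_000_000     # speed quoted for the lightsail nanocraft
VOYAGER_MPH = 38_500       # speed quoted for Voyager I

def years_to_alpha_centauri(speed_mph: float) -> float:
    # time = distance / speed, with distance expressed in light-years
    return DISTANCE_LY / (speed_mph / C_MPH)

print(round(years_to_alpha_centauri(SAIL_MPH)))     # ~29 years: "a few decades"
print(round(years_to_alpha_centauri(VOYAGER_MPH)))  # ~76,000 years: "millennia"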

Read More...
posted 9 days ago on techdirt
There's no greater sin than being wrong on the internet. But can you build a federal case out of it? Thomas Robins tried to do exactly that by filing a potential class action lawsuit against Spokeo ("the people search engine") for posting incorrect information about him to its website. The district court tossed his case for lack of standing, only to see it revived by the Ninth Circuit Appeals Court, which found that Robins could potentially demonstrate that Spokeo's incorrect information may have violated Robins' personal statutory rights. This eliminated the class action option, but granted him the standing to pursue this on his own behalf. Now, Robins has reached the end of the line and there's not much there for him. The Supreme Court has vacated the Appeals Court's judgment in favor of Robins and booted the case back to determine whether Robins can actually be granted standing, considering that to date he has yet to show he has suffered any "actual injury" from Spokeo's inaccurate information. This analysis was incomplete. As we have explained in our prior opinions, the injury-in-fact requirement requires a plaintiff to allege an injury that is both “concrete and particularized.” Friends of the Earth, Inc. v. Laidlaw Environmental Services (TOC), Inc., 528 U. S. 167, 180– 181 (2000) (emphasis added). The Ninth Circuit’s analysis focused on the second characteristic (particularity), but it overlooked the first (concreteness). We therefore vacate the decision below and remand for the Ninth Circuit to consider both aspects of the injury-in-fact requirement. So, the decision is mostly procedural and doesn't address any questions concerning Spokeo's gathering and dissemination of possibly incorrect information. As alleged by Robins, Spokeo basically has no idea who he really is. His profile, he asserts, states that he is married, has children, is in his 50’s, has a job, is relatively affluent, and holds a graduate degree. App. 14. According to Robins’ complaint, all of this information is incorrect. If all of this is incorrect, then one might think Robins should thank Spokeo for putting a more positive spin on his life. But Robins wasn't happy with the bogus results and claims this entirely positive (if entirely false) profile has cost him employment opportunities. The problem with his allegations, though, is that Robins has yet to allege anything more than a violation of the Fair Credit Reporting Act (FCRA) by Spokeo. And while violations can result in $100-1000/screwup payouts to claimants for inaccuracies, the Supreme Court expects Robins to produce more than allegations if he hopes to collect from Spokeo. As the Court notes, plenty of false information can be circulated without ever generating "concrete, particularized" harm. Robins cannot satisfy the demands of Article III by alleging a bare procedural violation. A violation of one of the FCRA’s procedural requirements may result in no harm. For example, even if a consumer reporting agency fails to provide the required notice to a user of the agency’s consumer information, that information regardless may be entirely accurate. In addition, not all inaccuracies cause harm or present any material risk of harm. An example that comes readily to mind is an incorrect zip code. It is difficult to imagine how the dissemination of an incorrect zip code, without more, could work any concrete harm. Without more information on the alleged harm, Robins has no standing. 
Robins may be able to produce this, but he'll be doing it in front of the Ninth Circuit Appeals Court. The dissenting opinion, however, finds Robins has already satisfied this requirement by stating the misinformation's negative impact on his ability to obtain employment. Because of the misinformation, Robins stated, he encountered “[imminent and ongoing] actual harm to [his] employment prospects.” Ibid. As Robins elaborated on brief, Spokeo’s report made him appear overqualified for jobs he might have gained, expectant of a higher salary than employers would be willing to pay, and less mobile because of family responsibilities. But as it stands, Robins hasn't produced enough evidence to satisfy what the Supreme Court is looking for in terms of harm, and it's leaving that up to the court that revived the case. "Jobs he might have gained" doesn't sound very "concrete." There's no doubt information gathered without sufficient vetting will inevitably produce misleading or wholly incorrect "profiles." Spokeo's bulk collection seems to be falling short of its stated goal of being a "people search engine," at least in Robins' case, but if he wants to pursue this further for himself -- much less as the lead representative for a class of similarly-harmed individuals -- he'll at least need to show he has actually been harmed by the misinformation, rather than theoretically harmed by his perception of employers' responses to the false data. 

Read More...
posted 9 days ago on techdirt
We all know the internet is for porn, right? But the implication in that age-old internet commandment is that it's for porn and nothing else. But that's not true! The internet is also for cats, for business-ing, for Techdirt, and for political messages. But what you really shouldn't do is mix any of the former with the latter, which it appears is what congressional candidate Mike Webb did on his Facebook page. What you might miss if you're not really paying attention are the open tabs Webb included in the screenshot he posted: "Ivone sexy amateur" and "Layla Rivera tight booty." Some might point out that those tabs could be anything, which would be both silly and easily refuted. Both are porn. And, hey, maybe they are good porn. After all, it's not like the revelation that an aspiring politician enjoys nudity should be shocking to us. Still, this is usually when said politician would go into run-and-hide mode, deleting posts and claiming hacks and whatnot. To his credit, Webb is not doing any of this. He actually re-posted the image in an updated post. However, he explains away the open porn tabs in terms that essentially amount to him being the Sherlock of porn-related malware. The explanation is 2,000 words long. It does not make a huge amount of sense, but apparently blames the pornographic images on an experiment Webb was performing to see whether or not someone was using porn sites to infect electoral candidates with malware that would prevent them from filing their candidacy before the deadline. Maybe. It’s honestly hard to parse. Webb writes, in part: “Curious by nature, I wanted to test the suggestion that somehow, lurking out in the pornographic world there is some evil operator waiting for the one in a gazillion chance that a candidate for federal office would go to that particular website and thereby be infected with a virus that would cause his or her FEC [federal election commission] data file to crash the FECfile application each time that it was loaded on the day of the filing deadline, as well as impact other critical campaign systems.” Sure, okay. You weren't jacking it, you were testing out a theory of malware delivered specifically to congressional candidates through weaponized porn videos. I'll give Webb credit: my head is spinning after trying to put the logic together that would make any of his explanation possible. It's probably a wasted effort. Mr. Webb, just mind your browser tabs next time, mmkay? Permalink | Comments | Email This Story

Read More...
posted 10 days ago on techdirt
Consumers looking for an electric car have several options to consider, but the buzz and excitement around Tesla continue to dwarf everything else. That buzz is hardly unfounded, but the scale of the company's success is staggering, and there's no single reason for it. This week, we discuss that simple question: just why is Tesla so successful? Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt. Permalink | Comments | Email This Story

Read More...
posted 10 days ago on techdirt
As hardware and software advance, so facial recognition becomes more accurate and more attractive as a potential solution to various problems. Techdirt first wrote about this area back in 2012, when Facebook had just started experimenting with facial recognition (now we're at the inevitable lawsuit stage). Since then, we've reported on an increasing number of organizations exploring the use of facial recognition, including the FBI, the NSA, Boston police and even the church. But all of those pale in comparison to what is happening in Russia, reported here by the Guardian: FindFace, launched two months ago and currently taking Russia by storm, allows users to photograph people in a crowd and work out their identities, with 70% reliability. It works by comparing photographs to profile pictures on Vkontakte, a social network popular in Russia and the former Soviet Union, with more than 200 million accounts. In future, the designers imagine a world where people walking past you on the street could find your social network profile by sneaking a photograph of you, and shops, advertisers and the police could pick your face out of crowds and track you down via social networks. One of FindFace's founders, Alexander Kabakov, points out the service could have a big impact on dating: "If you see someone you like, you can photograph them, find their identity, and then send them a friend request." The interaction doesn't always have to involve the rather creepy opening gambit of clandestine street photography, he added: "It also looks for similar people. So you could just upload a photo of a movie star you like, or your ex, and then find 10 girls who look similar to her and send them messages." Definitely not creepy at all. Of course, a 70% hit rate isn't that good: perhaps FindFace isn't really such a threat to public anonymity. The trouble is, the Guardian article reports that the company has performed three million searches on its database of around a billion photographs using just four common-or-garden servers. It's easy to imagine what might be achieved with some serious hardware upgrades, along with tweaks to the software, or with access to even bigger, more complete databases. For example government ones: according to the Guardian, FindFace's founders think the big money will come from selling their system to "law enforcement and retail." Although they've not yet been contacted by Russia's FSB security agency, they say they'd be happy to listen to offers from them. Perhaps comforted by the thought of all that future business coming his way, Kabakov is philosophical about the social implications of his company's technology: "In today’s world we are surrounded by gadgets. Our phones, televisions, fridges, everything around us is sending real-time information about us. Already we have full data on people's movements, their interests and so on. A person should understand that in the modern world he is under the spotlight of technology. You just have to live with that." That may well be true. But the question is, are we ready to do so? Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+ Permalink | Comments | Email This Story
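FindFace hasn't published how its matching actually works, but face search systems of this kind generally encode each face photo as a fixed-length numeric "embedding" (via a neural network) and then look for the stored embedding most similar to the query. Below is a minimal sketch of just that matching step, with random vectors standing in for a real face encoder; it illustrates the general technique, not FindFace's actual code.

import numpy as np

# Illustrative only: random vectors stand in for real face embeddings.
rng = np.random.default_rng(0)

# Fake "database" of 1,000 profile-picture embeddings (128 dimensions each),
# normalized to unit length so a dot product equals cosine similarity.
db = rng.normal(size=(1000, 128))
db /= np.linalg.norm(db, axis=1, keepdims=True)

# Fake "street photo" query: a noisy copy of profile #42.
query = db[42] + 0.1 * rng.normal(size=128)
query /= np.linalg.norm(query)

scores = db @ query              # cosine similarity against every stored profile
best = int(np.argmax(scores))    # index of the closest match
print(best)                      # -> 42: the noisy copy is still the closest match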

Read More...
posted 10 days ago on techdirt
Back in February, the judge presiding over the FBI's case against Jay Michaud ordered the agency to turn over information on the hacking tool it used to unmask Tor users who visited a seized child porn site. The FBI further solidified its status as a law unto itself by responding that it would not comply with the court's order, no matter what. Unfortunately, we won't be seeing any FBI officials tossed into jail cells indefinitely on contempt of court charges. The judge in that case has reversed course, as Motherboard reports. The government's motion has been granted, and the FBI does not have to provide the exploit code to the defense as previously ordered. That means that the defense in the case will probably be unable to examine how the evidence against their client was collected in the first place. It is not totally clear why Judge Robert J. Bryan changed his mind. On Thursday, the government and Bryan held a private meeting, where the government presented its reasons for nondisclosure of the Tor Browser exploit. The judge apparently believes the defense should still be able to examine the code, but can't be bothered with ensuring this will happen. Despite backtracking somewhat, Bryan still thinks the defense has a reason to see that code, according to audio of the public section of Thursday’s hearing provided by activist Phil Mocek. Of course, whether the FBI decides to then provide it is another matter. Given the FBI's earlier promise to withhold the details of the NIT despite being ordered to disclose them, I'd say there's about a 0% chance of the FBI voluntarily turning this information over to the defense. Right now, the agency is working overtime just trying to keep the evidence it obtained with its hacking tool from being tossed out of three other courts. It's also facing the prospect that third-party interlopers like Mozilla may still force it to release these details to someone outside of its own offices. At this point, hardly anything's going the FBI's way, so it will take whatever it can get, even if it's only temporary relief. Permalink | Comments | Email This Story

Read More...
posted 10 days ago on techdirt
Product Management is a growing field. Get a hands-on introduction to it with the Complete Product Management Bundle on sale for $45. You'll learn how to think like a product manager and take something from ideation to market research to wireframing to prototyping to user stories, and how to use popular tools like Popplet, Axure, and Pivotal Tracker. The 7 courses even cover how to get into the Product Management field and how to prepare for a job interview. With over 60 hours of instruction, you'll be well on your way towards discovering a new career path. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.Permalink | Comments | Email This Story

Read More...
posted 10 days ago on techdirt
Verizon's modus operandi has been fairly well established by now: convince state or local leaders to dole out millions in tax breaks and subsidies -- in exchange for fiber that's either only partially delivered, or not delivered at all. Given this story has repeated itself in New Jersey, Massachusetts, New York City and countless other locations, there's now a parade of communities asking somebody, anybody, to actually hold Verizon's feet to the fire. Given Verizon's political power (especially on the state level) those calls go unheeded, with Verizon lawyers consistently able to wiggle around attempts to hold the telco to account. In Pennsylvania, the story is much the same as elsewhere. Verizon was able to convince state leaders in the '90s to dole out billions in handouts for state-wide symmetrical 45 Mbps fiber broadband. But a decade later when people finally noticed fiber was nowhere to be found, Verizon managed to convince state leaders to effectively forget about the obligation entirely. Fast forward another decade and, after striking a 2009 franchise deal with the city of Philadelphia (again promising full city deployment of its FiOS fiber broadband service) you'll be shocked to discover what happened:"Philadelphia government officials are investigating whether Verizon has met an obligation to bring FiOS service to all residents of the city. Verizon obtained a cable franchise agreement from the city in February 2009, and the deadline to wire up all of Philadelphia passed on February 26 of this year...Philadelphia seems skeptical about whether Verizon actually met its obligation, but it is still looking for proof. The city set up a webpage asking residents to fill out a form to "tell us whether you have tried to order Verizon service but have been told by the company that service is not yet available in your neighborhood." Traditionally, ISPs can get away with this not only because they effectively own state legislatures, but because nobody in any part of government actually bothers to audit company deployment promises. What passes as an audit generally involves the ISP submitting its own claims that regulators fail to fact check. That's why Philadelphia leaders are being forced to crowdsource whether or not Verizon met its promises. Meanwhile, Verizon tells Philly city council leaders that they're unable to offer statistics right now on their FiOS deployment because, uh, unions:"Philadelphia should learn from New York's experience, Philadelphia City Council member Bobby Henon said during a hearing two weeks ago. “We do not want this to happen in Philadelphia,” Henon said, according to an article published by Technical.ly Philly. Henon wanted good data, but Verizon said it couldn't provide it yet because of the ongoing Verizon workers' strike. Verizon also said, “Any claims made at the hearing that we haven’t completed our obligations of our franchise agreement are untrue," according to the article." At this point there's plenty of blame to go around for the fact that history just keeps repeating itself without getting fixed. For one thing, just like in New York City, city leaders keep signing sweetheart deals with endless loopholes designed by Verizon lawyers, then acting shocked when Verizon actually uses those loopholes. For example, several city agreements let Verizon simply pass a set total of homes with fiber (anywhere up to several blocks away), instead of technically "serving" them. 
Other contracts contain language letting Verizon dodge or buy their way out of deployment obligations if certain TV uptake metrics aren't met. These are clauses cities have been warned repeatedly about but choose to ignore. Bad deals are struck behind closed doors by one administration, with subsequent city leaders left holding the bag. By that point Verizon can successfully argue that they technically met the terms of such deals, because the terms of such deals were designed to be malleable. Granted that doesn't excuse Verizon's proclivity for ripping off taxpayers on an industrial scale, but this dance of dysfunction wouldn't be quite so embarrassingly uncoordinated if cities would stop signing deals that promise the moon, but deliver stinky cheese.Permalink | Comments | Email This Story

Read More...
posted 10 days ago on techdirt
Three years ago, we wrote about a crazy story in which the Union of Jewish French Students (UEJF) was suing Twitter for $50 million, claiming that the fact that an anti-semitic hashtag started trending violated some sort of anti-hate speech law in France. Twitter, somewhat ridiculously, actually agreed to remove the tweets in question, saying they were offensive. Even after that, UEJF demanded that Twitter also reveal the identities of everyone who tweeted the hashtag... and won (not the money, but Twitter was told to hand over the user info)! Yeah, France is not a big supporter of free speech, we get it, but this is still ridiculous. At the time, Twitter claimed that the whole thing was really a publicity stunt for UEJF: "We've been in continual discussions with UEJF," a Twitter spokesperson told CNET. "As yesterday's new filing shows, they are sadly more interested in grandstanding than taking the proper international legal path for this data." Apparently, it's time to ramp up the grandstanding again, as reports are spreading that the same group has now sued Twitter yet again, and once again for $50 million, and (somewhat incredibly) in all of the tech press coverage I'm reading of this, none seem to mention the lawsuit from three years ago. Of course, this time it's not just Twitter, but YouTube and Facebook that are also being sued for $50 million. And it's not over a trending hashtag, but rather just a bunch of obnoxious tweets: In this "first mass test of social networks," the groups uncovered 586 instances of content that was "racist, anti-Semitic, denied the Holocaust, homophobic (or) defended terrorism or crimes against humanity," the joint statement said. Only a fraction of these postings were deleted by the host organisations within a "reasonable time," as required under a 2004 French law: four percent on Twitter, seven percent on YouTube and 34 percent on Facebook. Look, there are a lot of terrible people who say terrible stuff on the internet. That's kind of a thing that happens on the internet. And, no, it's not very nice. But it takes an incredible leap in logic to take that fact and say... "Hey, let's sue the internet companies for this." In the US, of course, such a lawsuit would be immediately laughed out of court for infringing on the First Amendment. You can say ignorant stuff in America and it won't lead to $50 million lawsuits against the technology you used to say your ignorant stuff. Now, as we've discussed in the past, American companies should be protected from these kinds of ridiculous lawsuits by the SPEECH Act, which rejects foreign judgments that wouldn't survive First Amendment scrutiny in the US. But, of course, that won't do much good for internet giants like Facebook, Twitter and YouTube -- all of whom have a strong presence in France, including employees. The courts can still target all of that. But, really, UEJF is being completely idiotic here: "It's a mystery whether moderating teams in social media are actually working," said Sacha Reingewirtz, president of the UEJF. Dominique Sopo, head of SOS-Racisme, said the social media giants were hypocritical. "These platforms seem more shocked about content with bare breasts, which is swiftly censored, than about incitement to hatred," Sopo said. "Our legal step aims at getting the authorities to apply the law so that these organisation submit to it in full." First of all, the quote from Reingewirtz is ridiculous. 
It's something someone says when they have absolutely no sense of the sheer scale of what these companies deal with. They don't scan every new post or video because that's simply impossible. And while Sopo at least has a point about Facebook's prude sensibilities, that doesn't necessarily apply to the other platforms... and also, is a very different thing. And, really, if you're trying to get platforms to broadly censor a class of content, it seems like a rather strange way to go about it to then mock the very same companies for blocking a class of content that you don't happen to find offensive. Who knows where this ends up, though, given that France is the same country that once declared Yahoo's CEO to be a war criminal, because someone used Yahoo's auction service (yes, children, Yahoo once competed directly with eBay in auctions) to auction off some Nazi memorabilia, it may not end well for those companies. The whole thing is ridiculous though. Even if you think saying stupid, ignorant, racist, homophobic and anti-semitic things should be against the law, at the very least focus on the people who actually said that stuff, rather than the technologies people used to say it.Permalink | Comments | Email This Story

Read More...