posted 15 days ago on techdirt
We've seen some awful copyright rulings over the years, but this latest one from Judge Rodney Gilstrap in Texas* is a real corker. First covered by Eriq Gardner, over at the Hollywood Reporter, the story is a complex one involving TV personality Dr. Phil and accusations of him imprisoning a producer who worked for him. What could that possibly have to do with copyright? Well, read on... * If you recognize the name, it's because for the past few years, he's handled a huge number of patent cases. Indeed, last year alone, he (yes, just this one judge) handled 20% of all patent cases in the US. Gardner sums up the background to the lawsuit nicely: In 2015, television personality Dr. Phil McGraw was sued by Leah Rothman, who worked as a segment director on his show for 12 years. She alleges suffering emotional distress and false imprisonment when during a meeting, Dr. Phil locked the door, yelled profanities and threatened employees for supposedly leaking internal information to the press. Before she sued, Rothman attempted to get evidence by accessing a database of videos from the Dr. Phil Show archives and recording on her iPhone a nine-second clip of what happened. All seems perfectly reasonable, right? But Dr. Phil's company, Peteski Productions, registered the copyright on that video clip she took... and sued her for copyright infringement. Yes, really. Because nothing says "I'm contesting the claim that I falsely imprisoned you" like "wait, your evidence that shows me falsely imprisoning you violates my copyright." This should be an open-and-shut case for a whole host of reasons. But, as a first pass, Rothman's lawyers pointed out that Rothman was clearly protected by fair use. And she is. Except not according to one of the weirdest fair use analyses I've ever seen, courtesy of Judge Gilstrap. It should have been obviously fair use just on the transformative use question -- seeing as she wasn't using the clip for a TV program at all, but as evidence in her own case against Dr. Phil. And, it was just a short clip -- and she wasn't "selling" it. And it wouldn't harm whatever "market" there was for that clip. In short, this should be fair use. Easily. But... nope. So let's go through Judge Gilstrap's four-factor fair use analysis. First up -- the purpose and character of the use -- which looks into the nature of the use, whether it was transformative and whether it was for commercial use. Again, to me, this seems quite clear: the purpose had nothing to do with the reason the work was created in the first place, making it obviously transformative. But, Judge Gilstrap wades into swampy waters by claiming that, because Rothman obtained the video via "bad faith," it's not. Rothman did not copy to then educate the masses or to further the greater good. She copied to aid her pending lawsuit seeking money damages where she is the only plaintiff and sole potential beneficiary. It is possible that a breach of contract or some other act of bad faith may sometimes be necessary to further an important public interest and therefore such conduct might not always weigh against fair use. However, there is a difference between a defendant who “purloins” a private manuscript or confidential video for personal gain and one who obtains, or even misappropriates, materials of significant public interest.... Here, there is no countervailing public interest because Rothman copied the work at issue “solely” for use in her own lawsuit. This seems wrong on multiple levels.
He points mainly to other cases where "bad faith" was an issue, mainly Harper & Row v. Nation Enterprises, in which The Nation published a large excerpt of Gerald Ford's memoir, and it was deemed not to be fair use, in part because of the Nation's own actions and intent. But, in that case, it was clear that what the Nation had done was to publish the excerpt from the book to reveal what was in the book and to undermine the sales of the actual book. In other words, it was in competition with the book. In this case, the intent was to use it as evidence. That has nothing to do with the copyright aspect. Similarly, Judge Gilstrap's weird focus in claiming that she was the "sole potential beneficiary" and there was nothing here to "further an important public interest" also seems... just wrong. Rothman's effort was to expose what she felt was dangerous behavior by a very public figure. How is that not furthering the public interest? Gilstrap notes other cases where "bad faith" harms fair use rulings, but the "bad faith" is always about infringing the copyright. Here, he seems to be arguing that the "bad faith" is... because she wanted to prove something bad about Dr. Phil. I can't even see how that's "bad faith" at all. From there, Judge Gilstrap looks at whether the use is transformative. Again, this should be an easy yes, given that the use was for an entirely different purpose. Hell, Congress even points out that "reproduction of a work in ... [a] judicial proceeding" is an example of what is meant for fair use. Gilstrap even quotes that line... and then twists himself into something of a logical pretzel to not care much about it: While it is true that many courts and commentators have acknowledged the general principle that use of a work in a judicial proceeding may be considered fair use, fewer have addressed whether copying an entire work in preparing a complaint is transformative. For example, Wollersheim, upon which Defendant relies, includes only a cursory discussion of fair use and makes no mention of transformativeness.... Instead, she copied the work to give to her lawyers in her California lawsuit. Even if such a use is transformative, it is not highly transformative. WHAT?!? Isn't the lawsuit she filed based, in part, on the video "commentary and criticism"? This seems to completely twist the meaning of transformativeness to make it nonsense. Onto commercial v. non-commercial. Here, at least, Gilstrap admits that using it in a lawsuit is "non-commercial" though he knocks it as "self-serving." Still, given his arguments before, he weighs the first prong heavily against Rothman, and (as is often the case) the first prong is considered the most important in a fair use ruling. From there, we got to the "nature of the work." Again, it seems clear that this should weigh in Rothman's favor. She took a short video clip to show as evidence of what had been done to her. It's evidence. But Judge Gilstrap says, first, that the "factual v. creative" nature makes it "neutral" -- i.e., favoring neither party, but that because it's "unpublished" that weighs "strongly against fair use." Again, this appears to be misapplying the rules on "unpublished" works. The idea behind that part of the fair use test is to avoid someone revealing something prior to it actually being published and thus undermining the market for it. But that's not the intent at all here. It was unpublished because it wasn't meant for publication and there was no intent to ever publish it. The third factor is "the amount used." 
And, again, I'm dumbfounded by Gilstrap's reasoning. Remember, this is a 9-second clip that Rothman filmed using her iPhone. Nine seconds out of a much larger archive. But... because Dr. Phil's company registered just those 9 seconds after Rothman had filed her lawsuit, Gilstrap argues that it's "the entire work." Neither side disputes that Rothman copied the entire work by recording the nine-second video from The Dr. Phil Show archives. Of course, this also goes completely against other courts -- such as the 2nd Circuit (which, admittedly, Gilstrap's court is not in...) -- which have said the appropriate determination on this prong is whether or not the amount of the work was more than necessary for the use. Here, clearly, the nine seconds was what Rothman felt was necessary to prove that, in her belief, Dr. Phil had falsely imprisoned her. Finally, we get to "the effect on the market." At least on this one, no amount of pretzel logic can convince Gilstrap that this video harms the market for that clip, and he agrees that it "weighs in favor of fair use." This, despite the fact that Dr. Phil's company tried to argue that "there is an illicit market for videos showing celebrities, such as Dr. McGraw, in a less than favorable light." The judge properly notes that there's no evidence that Rothman was trying to sell the clip into such a market and gives her this one point for fair use. But adding it all up, it's pretty clear where Gilstrap has come down on this one, and it's against fair use. Defendant, by her own admission, took an unpublished work that did not belong to her in violation of confidentiality agreements with Plaintiff “solely” for her personal benefit rather than for commentary, criticism, or public benefit. In light of these circumstances, based on the undisputed facts in the record, and after carefully weighing all the factors discussed above, the Court concludes that summary judgment in favor of Plaintiff is appropriate. This is craziness. The ability to misuse copyright in such a manner should be horrifying. The clear intent here -- even just in registering the copyright, let alone suing over it -- was to burden Rothman for suing Dr. Phil and to try to silence her. This is not the purpose of copyright, and this ruling makes a complete travesty of copyright laws. Even those who tend to support strong copyright should be horrified at this result. Silencing someone trying to prove that her boss falsely imprisoned her is not the purpose of copyright. Hopefully Rothman appeals, and the appeals court smacks this one down. It's an awful ruling in an awful case. Permalink | Comments | Email This Story

posted 15 days ago on techdirt
Members of a New York "Black Lives Matter" group are suing the town of Clarkstown and its police department over illegal surveillance. The plaintiffs allege they were placed under surveillance by the Clarkstown PD's Strategic Intelligence Unit (SIU) for a number of reasons, none of which were legal uses of the agency's spy wares. It would seem the lawsuit [PDF] has a good chance of paying off. Allegations of racial profiling and illegally surveilling citizens for their First Amendment activities are backed by the results of investigations and one police official's own admissions. A letter to the US Attorney's office in New York, attached as an exhibit, bolsters the claims made in the BLM lawsuit. In it, Clarkstown town Supervisor George Hoehman details a long list of surveillance violations and other police misconduct. According to the letter [PDF], the SIU began surveillance of members of a play entitled "A Clean Shoot?" performed by a group called "We the People." The surveillance included constant monitoring of their social media profiles and the deployment of geofencing in hopes of capturing anyone else who might be involved with the group and/or the play. The Clarkstown PD shared the information it gathered with the Haverstraw Police Department -- information that included the results of searches of criminal databases. Clarkstown's SIU "warned" Haverstraw the next production of the play would be in September, but noted that participants posed no threat of violence despite harboring "strong opinions." When setting up the geofences, Clarkstown PD lumped BLM and We the People members in with gang members, terrorists, and other more legitimate targets of police surveillance. This continued even though they were told (repeatedly) by the local district attorney's office they should not have Black Lives Matter listed as a surveillance target. In August 2016, the special prosecutor handling the investigation of this surveillance demanded Clarkstown PD hand over communications pertaining to its spying on the two groups. He never received anything. Instead, Police Chief Michael Sullivan deleted all of the data from his issued cellphone. He also allowed Sgt. Steven Cole-Hatcher (head of the SIU) to wipe his own cellphone and to delete possibly-incriminating files from his departmental computer. Sullivan was suspended for fifteen days. Cole-Hatcher was given the opportunity to retire. He's now suing to get his job back and it's his filings that have generated a lot of the evidence needed by BLM to successfully pursue this lawsuit. The letter also alleges things unrelated to the BLM lawsuit, but equally disturbing. Local law enforcement officials have apparently engaged in election interference, surveillance of judges, and monitoring of the town supervisor's social media profiles with the department's surveillance software. Much of what's in the most recent lawsuit retreads allegations made previously. Fortunately, some of those allegations have already been sustained. The lawsuit pleads violations of the First and Fourth Amendments and seeks damages and injunctions against future unlawful surveillance. Chief Sullivan has (unhelpfully) explained BLM and We the People weren't singled out for unlawful surveillance. He stated "many other groups and individuals" were surveilled by the SIU -- a statement he made without clarifying whether these other instances were for legitimate reasons.
Adding the latest allegations to those already sustained suggests local law enforcement agencies have more in common with cancerous growths than the "protectors and servants" ideal. Permalink | Comments | Email This Story

posted 15 days ago on techdirt
The EFF and ACLU have achieved a victory in an acronym-heavy public records case. The California Supreme Court has ruled the Los Angeles Police Department (LAPD) and Los Angeles Sheriff's Department (LASD) will have to turn over data acquired by their automatic license plate readers (ALPRs). Both entities tried to keep these records from the EFF and ACLU by claiming every single one of the millions of plate records were "investigatory records," exempt from disclosure under California's public records law. This apparently included the millions of "non-hit" records never utilized in any LAPD/LASD investigation. With the plate readers collecting 1.5-2 million records per week, they were basically arguing every driver passing by an ALPR was under investigation. That's not how the state's Supreme Court sees it [PDF]. The "investigatory records" exemption pertains to targeted, ongoing investigations. The public records law cannot be stretched to cover indiscriminate mass surveillance. Accordingly, we hold that real parties’ process of ALPR scanning does not produce records of investigations, because the scans are not conducted as part of a targeted inquiry into any particular crime or crimes. The scans are conducted with an expectation that the vast majority of the data collected will prove irrelevant for law enforcement purposes. We recognize that it may not always be an easy task to identify the line between traditional “investigation” and the sort of “bulk” collection at issue here. But wherever the line may ultimately fall, it is at least clear that real parties’ ALPR process falls on the bulk collection side of it. The court also says the fact that the database of records is routinely searched during current investigations does not make everything in it immune from public records requests. If the law were interpreted this way, all it would take to exempt every one of the millions of plate records from disclosure would be the inclusion of a targeted plate in every batch of ALPR records requested. The law enforcement agencies also claimed any release of the data would harm law enforcement interests, supposedly by giving criminals the info they needed to avoid plate readers. The Supreme Court finds this far less persuasive than the lower court did. The trial court appears to have placed significant weight on the possibility that a criminal could use ALPR data to identify law enforcement patrol patterns. The court did so based on the declaration of LAPD Sergeant Daniel Gomez. In pertinent part, Sergeant Gomez claimed that an individual requesting ALPR data “could use the data to try and identify patterns of a particular vehicle.” (Italics added.) However, Sergeant Gomez also seemed to cast doubt on the likelihood that an individual could do so successfully, explaining that “[u]nlike law enforcement that uses additional departmental resources to validate captured [A]LPR information, a private person would be basing their assumptions solely on the data created by the [A]LPR system . . . .” Nevertheless, we will assume, as the trial court found, that a person could at least roughly infer patrol patterns from a week’s worth of plate scan data. The problem with this aspect of the trial court’s analysis is that, even assuming patrol patterns can be inferred from ALPR data, there is little reason to believe that this possibility points meaningfully toward “a clear overbalance on the side of confidentiality” with respect to all the records sought. (Michaelis, supra, 38 Cal.4th at p. 1071.) 
For one thing, fixed ALPR scanners are just that—fixed— so concerns about patrol patterns are inapplicable to the data they collect. For another, the record does not appear to indicate that knowledge of where law enforcement officers were during a particular week is a reliable guide to where they will be at some precise moment in the future. The trial court did not find, for example, that real parties conduct law enforcement in the same way that they might operate a bus service—moving from point to point at particular times on particular days, never deviating to attend to other business or emergencies. We are not aware of substantial evidence that would have supported such a finding. The court, however, does find one thing to be concerned with, and it's an issue the LAPD/LASD generally doesn't take into consideration until they're being asked to hand over bulk collection records: privacy. Although we acknowledge that revealing raw ALPR data would be helpful in determining the extent to which ALPR technology threatens privacy, the act of revealing the data would itself jeopardize the privacy of everyone associated with a scanned plate. Given that real parties each conduct more than one million scans per week, this threat to privacy is significant. We therefore conclude that the public interest in preventing such disclosure “clearly outweighs the public interest served by disclosure of” these records. This means the ACLU and EFF will end up with the data they seek, but in anonymized form. It's unclear at this point how this will be anonymized, or if the data, in its abstracted form, will show anything interesting. And there's still more discussion to be had on remand before the ACLU/EFF can actually take possession of the one week of ALPR data they requested. But it's still a significant precedent -- one that narrows the scope of an often-abused public records exemption. Permalink | Comments | Email This Story

posted 16 days ago on techdirt
This week (or last week, I suppose — this post was moved for the long weekend!) our first place comment on the insightful side comes in response to Attorney General Jeff Sessions using Hurricane Harvey as an argument for increased police militarization. An anonymous commenter set things straight: Speaking as a first responder/first responder trainer... ...no. What's needed instead are exactly the kinds of resources that this administration wants to strip out of FEMA: simple, basic essentials that are relatively inexpensive and save lots of lives. Let me give you a timely example. The Cajun Navy, bless their hearts, showed up in force in Houston to do whatever they could to supplement the hopelessly-overwhelmed local, state, and federal personnel. And now some of them are dead, because they didn't have lifejackets (PFDs). A minimal PFD for this kind of work costs about $100, a good one is about $250, a bulk order for several thousand would no doubt drive the price down. No, it's not very cool and sexy and oh-gosh-look-at-the-pretend-soldiers, but it's a basic tool that keeps people alive in situations where they'd otherwise die. A quarter-million dollars worth of PFDs is chump change in comparison with the overall expense -- flying helicopters is REALLY expensive -- but it would yield value far beyond its price. That's just one example. There are a lot of others, including swiftwater rescue training -- something that almost none of the Houston city personnel have had because there's no money for it. But SWR is essential for anyone trying to perform rescues in fast water, particularly in urban areas where there are all kinds of hazards under the surface. Two days of quality SWR instruction costs $250/student and is probably enough to keep them from dying while trying to keep other people from dying. Harvey. Sandy. Katrina. This is the new normal. There will be another one. Soon. And money needs to be spent on basic gear and basic training before one of these turns into a multi-thousand person casualty event. So don't buy the cops AR-15's: buy them PFDs and SWR training. Those are FAR more likely to keep them alive. Meanwhile, a Deputy AG was trotting out a fable to make an absurd point about intellectual property, leading Ninja to win second place on the insightful side by making some critical modifications: Good Lord they're still at these flimsy bullshit claims? I'll fix the little story for him: As a child, I learned a fable about a hen that finds some wheat grains and asks other animals for help in planting them. Nobody is willing to help but some will give it money via crowdfunding campaign, so the hen does the work itself hiring some people to help. At every stage of the process – harvesting the wheat, threshing it, milling it into flour, and baking the flour into bread – it keeps building and people keep financing it as they are interested in the results. But when the work is finished, everyone wants to eat the bread. So the hen makes infinite copies of that bread and everybody is happy, some even pay for some copies! Seeing how successful the bread is, hen decides to go for Bread 2.0 with new pepperoni fillings. The end. Yes, I would download a bread. For editor's choice on the insightful side, we've got a pair of comments from the same pair of posts. 
First, in response to the post about Sessions, one commenter left some thoughts that included an easy-to-make but unfortunate error in the conflation of two homophones, inspiring TechDescartes to coin a rather good phrase: Spelling matters. Which makes me think of a TLDR for the post: We don't want the military enforcing any ordinance and we don't want the police touching any ordnance. Next, on the post about the Deputy AG's copyright analogy, one commenter tiresomely accused us of just being a bunch of pirates who won't acknowledge creators' rights, leading Stephen T. Stone to spell things out in more detail: Their "intellectual property" rights are given to them by a set of laws that never foresaw modern technology. If copyright could be updated in a way that aligns with the Internet Age, the ease of copying data, and the original intent of copyright, I would likely support it. But it cannot—at least, not while corporations control the writing of such laws—so I cannot. I will stand against black-box code that cedes partial control of my device to someone who does not own it—DRM. I will stand against instant takedowns of content if even a small part of it uses someone else’s content under Fair Use guidelines—the DMCA. I will stand against a corporate welfare system that locks up the cultural commons behind a gated wall—the current length of copyright terms. I will stand against any part of copyright law that forgets the law’s original purpose: To strike a bargain between the artist and the general public such that they both benefit from the creation of any given work of art. I will also support, in any way that I can, artists whose work I enjoy. I will ask others to support artists in any way they can. I will ask people to pay the often underpaid and overworked freelance artists more than those artists think they deserve for their time and skill. And I will support an individual artist's right to monetize their work as they see fit. You may judge me by these principles. Doing anything less will show a distinct lack of your own. Over on the funny side, guess what? We've got the same pair of posts again! This time, first place goes to an anonymous commenter with some more thoughts for Jeff Sessions: Hurricane Harvey was a surprise attack by nature. Meteorologists predicted that the hurricane would strike farther south and west. Nature staged a sneak attack by hitting Houston instead. The police must be adequately militarized to meet these attacks in kind, and to retaliate against Nature with the full surplus might of the United States military. And in second place, we've got a comment from TechDescartes about our admittedly inconsistent use of typographical emphasis when people say stupid things about copyright, like comparing it to a fable about physical goods: Excellent post... ...but someone needs to go check on Mike. I think he broke the Ctrl+B and Ctrl+I key combos on his computer pressing them so hard. For editor's choice on the funny side, we start out with an anonymous response to a bizarre complaint that we aren't showing enough "journalistic balance" by reporting all the good things about killing net neutrality: Mhm. I also wonder where is journalistic balance in weather reporting? They say tomorrow's gonna be hot and humid in Florida, but what about balance - they need to report about Floridian summer snow and blizzards as well. And finally, we've got a comment from Bruce C. proposing an idea for solving the Facebook moderation problem: Do what NPR did...
Maybe Facebook should shut down its comments section. After all, multiple organizations claim it improves their interaction with their users. That's all for this week folks! Permalink | Comments | Email This Story

posted 17 days ago on techdirt
Five Years Ago This week in 2012 started out with us catching up on something that happened late on the previous Friday: the jury in the Apple/Samsung patent trial released a surprise snap verdict that Samsung had infringed. Worryingly, they pretty much admitted that they ignored prior art and other key factors to do so — and the foreman's explanation in interviews showed he didn't really understand prior art to begin with. Of course, the whole thing seemed to be simply demonstrating the viability of Samsung products as an alternative to iPhones and iPads — and as you likely know, just last December, SCOTUS overturned this verdict. Ten Years Ago This week in 2007, the RIAA managed to score a victory in one of its attempts to get a judge to say that "making available" counts as distribution, and immediately began pushing to spread that ruling to other courts. Viacom got meta in its habit of awful YouTube takedowns by taking down someone's video of a Viacom-owned show airing one of his YouTube videos without permission. Congress was trying to get ISPs to be copyright cops and introduce the nightmare of copyright to the fashion industry. And the first iPhone was successfully unlocked, leading AT&T to predictably and pointlessly lash out. Fifteen Years Ago This week in 2002 there were lots of new and emerging things that people were grappling with (though it was not the first freak-out about ultra-violent video games nor would it be the last). There was the realization that becoming suddenly internet famous comes with a cost that not everyone enjoys; there was the attempt to figure out

posted 18 days ago on techdirt
Only available until tomorrow! Get your Original Techdirt Logo Gear » It was last week that we celebrated Techdirt's 20th anniversary, and part of that included digging up the very first Techdirt logo... ...and turning it into some limited edition t-shirts, hoodies and stickers! Now it's your last chance to get your hands on this special anniversary gear, as the sale ends tomorrow, Sunday September 3rd. So if you want one, hurry up and order now! And don't forget to check out our store on teespring for other Techdirt gear. Permalink | Comments | Email This Story

posted 18 days ago on techdirt
We have often criticized the Patent Office for issuing broad software patents that cover obvious processes. Instead of promoting innovation in software, the patent system places landmines for developers who wish to use basic and fundamental tools. This month's stupid patent, which covers user permissions for mobile applications, is a classic example. On August 29, 2017, the Patent Office issued U.S. Patent No. 9,747,468 (the '468 patent) to JP Morgan Chase Bank, titled "System and Method for Communication Among Mobile Applications." The patent covers the simple idea of a user giving a mobile application permission to communicate with another application. This idea was obvious when JP Morgan applied for the patent in June 2013. Even worse, it had already been implemented by numerous mobile applications. The Patent Office handed out a broad software monopoly while ignoring both common sense and the real world. The full text of Claim 1 of the '468 patent is as follows: A method for a first mobile application and a second mobile application on a mobile device to share information, comprising: the first mobile application executed by a computer processor on a mobile device determining that the second mobile application is present on the mobile device; receiving, from a user, permission for the first mobile application to access data from the second mobile application; the first mobile application executed by the computer processor requesting data from the second mobile application; and the first mobile application receiving the requested data from the second mobile application. That's it. The claim simply covers having an app check to see if another app is on the phone, getting the user's permission to access data from the second app, then accessing that data. The '468 patent goes out of its way to make clear that this supposed invention can be practiced on any kind of mobile device. The specification helpfully explains that "the invention or portions of the system of the invention may be in the form of a 'processing machine,' such as a general purpose computer, for example." The patent also emphasizes that the invention can be practiced on any kind of mobile operating system and using applications written in any programming language. How was such a broad and obvious idea allowed to be patented? As we have explained many times before, the Patent Office seems to operate in an alternate universe where the only evidence of the state of the art in software is found in patents. Indeed, the examiner considered only patents and patent applications when reviewing JP Morgan's application. It's no wonder the office gets it so wrong. What would the examiner have found if he had looked beyond patents? It's true that in mid-2013, when the application was originally filed, mobile systems generally asked for permissions up front when installing applications rather than interposing more fine-grained requests. But having more specific requests was a straightforward security and user-interface decision, not an invention. Structures for inter-app communication and permissions had been discussed for years (such as here, here, and here). No person working in application development in 2013 would have looked at Claim 1 of the '468 patent and thought it was non-obvious to a person of ordinary skill. JP Morgan's "invention" was not just obvious, it had been implemented in practice. At least some mobile applications already followed the basic system claimed by the '468 patent.
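Before getting to those real-world examples, it helps to see just how little Claim 1 actually requires. Here is a minimal, deliberately platform-agnostic sketch of the four claimed steps -- the app names, the fake installed-apps table, and the helper functions are invented for illustration and don't correspond to any real mobile API:

```python
# Hypothetical sketch of the four steps recited in Claim 1 of the '468 patent.
# Nothing here maps to a real mobile platform; it only shows how thin the claim is.

INSTALLED_APPS = {"second_app": {"greeting": "hello from the second app"}}


def is_app_present(app_id: str) -> bool:
    """Step 1: the first app determines the second app is present on the device."""
    return app_id in INSTALLED_APPS


def user_grants_permission(app_id: str) -> bool:
    """Step 2: receive permission from the user (here, just a console prompt)."""
    answer = input(f"Allow access to data from {app_id}? [y/n] ")
    return answer.strip().lower().startswith("y")


def request_data(app_id: str, key: str):
    """Steps 3 and 4: request data from the second app and receive it."""
    return INSTALLED_APPS[app_id].get(key)


if __name__ == "__main__":
    target = "second_app"
    if is_app_present(target) and user_grants_permission(target):
        print(request_data(target, "greeting"))
```

That is the entire claimed "invention": check that another app is there, ask the user, request the data, receive the data.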
In early 2012, after Apple was criticized for allowing apps to access contact data on the iPhone, some apps began requesting user permission before accessing that data. Similarly, Twitter asked for user permission as early as 2011, including on "feature phones", before allowing other apps access to its data. Since it didn't consider any real world software, the Patent Office missed these examples. The Patent Office does a terrible job reviewing software patent applications. Meanwhile, some in the patent lobby are pushing to make it even easier to get broad and abstract software patents. We need real reform that reduces the flood of bad software patents that fuels patent trolling. Reposted from EFF's Stupid Patent of the Month series. Permalink | Comments | Email This Story

posted 19 days ago on techdirt
America's largest sheriff's department is rolling towards an accountability train wreck. Despite years of discussing the issue, the Los Angeles County Sheriff's Department still has no cohesive policy on body cameras, nor has it taken steps to outfit its officers with the devices. This less-than-ideal situation is being made worse by deputies purchasing their own body cameras with personal funds. An estimated 20 percent of Los Angeles County's 10,000 deputies have bought cameras for themselves, according to the county's inspector general. Sheriff Jim McDonnell concedes some deputies have their own cameras but disputes that as many as 2,000 wear them on duty. Whatever the number, not a single frame of any video from these cameras has ever made it into the public domain. And therein lies the problem. Body cameras owned by law enforcement officers serve zero public purpose. Any recordings remain the personal property of the officers, who can delete and edit footage as they see fit. The only footage likely to make its way into the hands of the sheriff's department is recordings clearing officers of wrongdoing. While it may be possible to subpoena this footage for civil suits and criminal prosecutions, there's no guarantee the footage will arrive unaltered, or even arrive at all. Personal body cams are unlikely to be bundled with unlimited storage. Footage will be overwritten often (depending on how heavily the camera is used while on duty) and remains in the control of officers, rather than the department and its oversight. As is pointed out in the AP article, the use of privately-owned body cameras contradicts DOJ guidance on the matter. A 2014 DOJ report noted that private cameras on public employees are an all-around bad idea. "Because the agency would not own the recorded data, there would be little or no protection against the officer tampering with the videos or releasing them to the public or online," the report said. "Agencies should not permit personnel to use privately owned body-worn cameras while on duty." The LA sheriff's department makes this worse by allowing the practice to continue without official policies on body camera use. Even the barest minimum of discipline for deleting footage is impossible, as the department is powerless to take action against deputies who vanish away footage containing alleged misconduct. The head of the local law enforcement union pretty much says the only people benefiting from personal body cameras are the officers that own them. "It's really a personal preference," [union president Ron] Hernandez said. "The guys we have spoken to have said they thought it would be beneficial for them. They see the value in covering themselves." Sorry, but that's not what body cameras are for. They may provide evidence clearing officers of misconduct, but body cameras aren't there to create law enforcement highlight reels. While it's great some officers may find the cameras useful for clearing themselves of charges, they are public employees, not private entities engaging in personal enforcement of laws. The footage should be as public as their positions. But this will never happen if their employer is unwilling to craft a solid body cam policy that addresses private ownership of cameras. As it stands now, the department is allowing its existing policies on evidence handling to act as a stand-in for its non-existent body camera policy. According to these rules, all evidence must be held for two years and turned over on request to the sheriff's department.
Supposedly, this will encompass privately-held body camera footage. But it would be much better for body cam evidence to be stored on site where it's immediately accessible and less prone to tampering. Body cameras are already problematic. They have the potential to be great tools of accountability, but this has been continually stunted by legislators and law enforcement agencies, many of which have done all they can to keep this footage out of the public's hands. In this case, the LASD's lack of forward momentum on the camera front has turned a portion of its workforce into sole proprietors with badges, guns, and a collection of home movies starring residents of L.A. County. Permalink | Comments | Email This Story

posted 19 days ago on techdirt
We've noted time and time again how numerous websites have been killing news comments because they're too lazy and too cheap to cultivate an on-site community, and/or don't like having story errors pointed out in quite such a transparent, conspicuous location. Of course editors and publishers can never admit this is their real motivation, instead offering a rotating crop of disingenuous prattle about how they're muzzling their readers and shoving them over to Facebook because they're just so very into building relationships and are breathlessly-dedicated to improving conversation. This week Al Jazeera joined the hottest trend in media, penning a missive over at Medium about how they're banning public news comments as part of their quest to... wait for it... give a voice to the voiceless: The mission of Al Jazeera is to give a voice to the voiceless, and healthy discussion is an active part of this. When we first opened up comments on our website, we hoped that it would serve as a forum for thoughtful and intelligent debate that would allow our global audience to engage with each other. However, the comments section was hijacked by users hiding behind pseudonyms spewing vitriol, bigotry, racism and sectarianism. The possibility of having any form of debate was virtually non-existent. Except that's simply not true. Numerous websites, including this one, have shown repeatedly it's possible to discuss complicated, divisive subjects without the metaphorical house burning down. Yes, it's true that when you don't moderate, show up, or give much of a damn about your comment system, it's quick to devolve into a cesspool of trolls and nincompoops. But the reality is that websites can't monetize quality discourse during budget meetings, so it's easier to just outsource all conversation to the homogeneous blather zone of Facebook, where listening to what your own customers are saying becomes somebody else's problem: Over time, we found social media to be the preferred platform for our audience to debate the issues that matter the most to them. We encourage our audience to continue to interact with us this way. We realise that this move will come as a disappointment to the members of our audience who did try and engage in thoughtful debate on our site. However, we will be working hard over the coming months to figure out how best to bring back debate to aljazeera.com. To continue the debate on social media, please share your thoughts with us on our Facebook page and get in touch via Twitter. Again, does anything really give a "voice to the voiceless," foster quality conversation, or cultivate relationships quite like muzzling your customers, then shoving them toward a massive social media site where their thoughts, insights and contributions will get lost in a tsunami of prattle? It's clear that countless publishers really love the idea of reverting back to the era of "letters to the editor," when public feedback to your reporting could be carefully censored and repackaged as a genuine dialogue with your readership. But this line of thinking is a disservice to the quickly-evolving conversation the news has become. I keep waiting for a news website to ban comments then candidly admit it was because they just didn't give much of a damn. Until then, the best we'll get are missives about how the best way to bring a voice to the voiceless is apparently with a good, swift kick in the ass. Permalink | Comments | Email This Story

posted 19 days ago on techdirt
For those of us that advocate principles of free speech, the most hallowed battleground for that fight must necessarily be in schools. If these ideals are to win the day on the longer timeline, it will be because subsequent generations take up the banner of free speech and conversation in more numbers than do their opponents. In the West, these fights amount to issues that are indeed important, but pale in comparison to what occurs elsewhere in the world. To that end, it's as important to see how far we've come as it is to understand how far we have to go. Take Pakistan, for instance. Most of us will know that Pakistan has not taken the same trajectory in terms of speech as America. Differences of this sort are to be expected, but they can reveal themselves in stark ways. For instance, a local school in Pakistan with a tradition of singing John Lennon's famous song Imagine has this year decided to remove the song from the annual concert for reasons that you've likely already guessed. Pupils at the Karachi Grammar School (KGS), a liberally-inclined private institution with 2,400 places, were on Friday night due to sing the anthem at an in-house concert, upholding a tradition that stretches back decades. But administrators decided it would no longer be safe after a popular conservative journalist highlighted ‘controversial lyrics’ in the song, hinting that they might fall foul of Pakistan’s strict blasphemy laws. What happened here is actually pretty simple. Ansar Abbasi, the conservative journalist mentioned above, picked up this story as if it were new and scandalous and blasted out a call to his Twitter followers to demand Lennon's iconic song be banned from the concert. Because the song rather famously, or infamously depending on your perspective, asks listeners to imagine a world without religions over which to fight, Abbasi suggested that the song was pimping atheism. To be clear, the song doesn't actually do that, and Lennon himself said the whole point was to imagine all the fighting that could be avoided if religions didn't compete with one another. Distinctions like that, however, aren't fertile ground for outrage-trolling. When other conservative media outlets in Pakistan picked up the story and decided to call out the school and its administrators by name, the school was essentially left with no choice but to bow to the mobbish minority over security concerns. The school, which is heavily-guarded, subsequently dropped the song from its concert. Former student Daanika Kamal told the Telegraph that Mr Abbasi was ignoring the message of ‘Imagine’, which invites listeners to picture a “brotherhood of man”, and “inciting hate”. “We were introduced to [‘Imagine’] by the school,” she said, “it was always a song of peace, that’s why it resonated with us. When you live in a country like Pakistan and are constantly hearing about attacks it is really soothing to hear a song that unites us.” It should be obvious how silly and damaging this sort of thing is. When a country's speech laws are so backwards as to allow mainstream journalists to call for government intervention to keep school-aged children from singing one of the most benign songs in musical history, it should be clear that something has gone awry. When those same calls can get school administrators to bend the knee to the vocal minority even before the government gets involved, the problem is even worse.
I could spend calories and time trying to figure out exactly what people like Abbasi think school children should be learning in the classroom under the premise that Imagine is a danger, but fortunately he has made his views on that public so I don't have to. Mr Abbasi yesterday tweeted that “we need to teach the Quran to check both forms of extremism - religious or liberal”. It shouldn't take much mental effort to see just how bad a plan for curriculum that obviously is. Permalink | Comments | Email This Story

posted 19 days ago on techdirt
We've discussed many times how unfortunately toothless section 512(f) of the DMCA is in practice. That's the section that can supposedly be used against "misrepresentations" under the DMCA. But, in practice, nearly all attempts to use DMCA 512(f) have failed. That's why it's always so interesting to see one that is succeeding. But as law professor Eric Goldman notes, there's a case where a 512(f) claim has survived a motion to dismiss. The background to the case is a bit involved, but apparently someone named Shirley Johnson was posting YouTube videos criticizing "New Destiny Christian Center" and the "Paula White Ministries." Paula White Ministries claimed copyright infringement to YouTube and Johnson counternoticed. Paula White Ministries then sued, claiming copyright infringement over Johnson's use of images and videos in her criticism. The case was dismissed, but the judge suggested that Johnson file a lawsuit against the plaintiff for "malicious prosecution." She did so, though included in that suit was also a claim about "false copyright infringement complaints." The court dismissed those claims, noting that those are not part of a malicious prosecution claim, so a separate lawsuit was filed claiming 512(f) violations. The defendants in this case made a motion to dismiss, but the big news here is that the 512(f) claim lives on. Here, Johnson has presented facts sufficient for the Court to draw the reasonable inference that Defendants knowingly misrepresented copyright infringement to YouTube. Specifically, the verified Complaint avers that: (1) on multiple occasions, PWM/New Destiny “willfully, knowingly[,] and materially” made § 512(f) misrepresentations to YouTube that Johnson’s videos were infringing PWM’s copyrights... (2) “PWM did not hold a valid copyright registration or certificate to the content contained in [Johnson’s] videos at the time of the misrepresentations” ... and (3) the material posted on Johnson’s YouTube channel “was used lawfully in accordance with 17 U.S.C. § 107 of the Copyright Act”—the fair use doctrine.... These allegations suffice to support a § 512(f) claim. See Curtis, 45 F. Supp. 3d at 1199 (finding that a § 512(f) claim was adequately pleaded where plaintiff “repeatedly alleged that [d]efendants knew that the takedown notices contained false infringement allegations”); see also Lenz v. Universal Music Corp., 572 F. Supp. 2d 1150, 1154–55 (N.D. Cal. 2008) (“An allegation that a copyright owner acted in bad faith by issuing a takedown notice without proper consideration of the fair use doctrine . . . is sufficient to state a misrepresentation claim pursuant to Section 512(f) of the DMCA.”). The argument is slightly complicated by the fact that it appears that Johnson (bizarrely) failed to argue fair use in her complaint, and the court notes that this would have made her 512(f) argument even stronger, but cannot be used here. The defendants try to make a few claims to block this, including no actual injury, but the court doesn't buy it: Injury is a critical element of a § 512(f) claim.... As such, Johnson must allege that the purported misrepresentations proximately caused her damages.... In the Malicious Prosecution Action, the Court found that Johnson failed to state a § 512(f) claim because “each factual allegation related to Johnson's damages stem[med] from the prosecution of the Copyright Action rather than the removal of her videos from YouTube.” ...
Here, Johnson again asserts damages stemming from prosecution of the Copyright Action in her Complaint, but she also cites damages resulting from the termination of her YouTube channel.... Thus, Johnson has sufficiently pled the existence of an injury caused by the misrepresentations. Johnson also puts a First Amendment claim into this filing, which the court rejects for a variety of reasons. But the key thing here is that a 512(f) claim has actually survived so far. There's still a long way to go, of course, and Professor Goldman notes "long odds" on it being successful in the end. Still, it's always good to see 512(f) get at least some recognition from the courts. Permalink | Comments | Email This Story

posted 19 days ago on techdirt
Get a little more creative with your smartphone photography with the super portable KOBRA Universal 2-in-1 Fish-Eye Lens Kit that attaches easily to iPhone or Android phones. Constructed of sturdy aluminum, this kit includes a fish-eye, wide-angle, and a super-macro lens, adding the kind of versatility to your phone's camera that you'll appreciate when you're feeling extra creative and all for just $8. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team. Permalink | Comments | Email This Story

posted 19 days ago on techdirt
Over the last few weeks, we've written a number of times about how systematically bad internet platforms are at determining how to deal with abuse online. This is not to mock the platforms, even though many of the decisions are comically bad, but to note that this is inevitable in dealing with the scale of these platforms -- and to remind people that this is why it's dangerous to demand that these companies be legally liable for policing speech on their platforms. It won't end well. Just a few weeks ago, we wrote about how Twitter suspended Ken "Popehat" White for posting an email threat he'd received (Twitter argued he was violating the privacy of the guy threatening him). From there, we wrote about a bunch of stories of Facebook and Twitter punishing people for documenting abuse that they had received. But this latest story is even slightly crazier, as it appears that abusers were taking advantage of this on purpose. In this case, the story involves Russian Twitter bots. First, the Atlantic Council wrote about Russian Twitter trolls trying to shape a narrative after the Nazi event in Charlottesville. In response, those very same Twitter bots and trolls started bombarding the Twitter feeds of the researchers. And here's where the story gets even weirder. When Joseph Cox, writing for The Daily Beast, wrote about this (at the link above), those same Twitter bots started targeting him too. And... that caused Twitter to suspend his account. No, really. “Caution: This account is temporarily restricted,” a message on my account read Tuesday. “You’re seeing this warning because there has been some unusual activity from this account,” it continued. Again, it's not hard to see how this happened. Cox's Twitter account suddenly took on a bunch of bot followers, many of whom started retweeting him. From Twitter's perspective, it's easy to see how that looks like someone gaming the system -- possibly buying up fake followers and fake retweets. But, here, it appears to have been done to target the user, rather than to fake boost him. After all, it's completely understandable why Twitter would have a system that would seek out situations where a ton of fake followers were suddenly following someone and retweeting them. That would be a clear pattern indicating spam or something nefarious. And, in designing the system, you might think that such a thing would never be used to harm someone -- but by building in the mechanism to recognize this is happening and to suspend the account, you're now creating a weapon that will be gamed. Cox eventually got his account back and got an apology ("for the inconvenience") from Twitter. But, once again, for everyone out there demanding that these platforms be more forceful in removing users, or (worse) arguing that there should be legal liability on them if they fail to kick off people expeditiously, be careful what you wish for. You may get it... and not like the results. Permalink | Comments | Email This Story
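Twitter doesn't publish how its anti-spam triggers work, so the following is a purely hypothetical sketch of the kind of "sudden fake amplification" heuristic described above -- the thresholds and field names are invented, and this illustrates only the general shape of such a rule, not Twitter's actual system:

```python
# Purely illustrative guess at a "sudden fake amplification" rule.
# The point is structural: the signal (a burst of young, bot-like followers
# retweeting an account) attaches to the *target* account, so a crude rule
# ends up restricting the victim rather than the attacker.

from dataclasses import dataclass


@dataclass
class Follower:
    account_age_days: int
    followed_hours_ago: int
    retweeted_target: bool


def looks_like_fake_amplification(new_followers: list[Follower],
                                  burst_window_hours: int = 24,
                                  min_burst: int = 500,
                                  young_account_days: int = 30) -> bool:
    """Flag a target account that suddenly gains many young followers
    who immediately retweet it."""
    recent = [f for f in new_followers if f.followed_hours_ago <= burst_window_hours]
    young_and_retweeting = [
        f for f in recent
        if f.account_age_days <= young_account_days and f.retweeted_target
    ]
    return len(young_and_retweeting) >= min_burst


# A journalist swarmed by a botnet trips the rule exactly like someone
# who bought fake followers would -- the heuristic can't tell the difference.
bot_swarm = [Follower(account_age_days=3, followed_hours_ago=2, retweeted_target=True)
             for _ in range(600)]
print(looks_like_fake_amplification(bot_swarm))  # True -> account gets restricted
```

The rule keys on the pattern rather than the intent, and the flag lands on the account being amplified -- which is exactly how a botnet can weaponize it against the journalist it is swarming.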

posted 19 days ago on techdirt
So we've noted time and time again how the vast majority of consumers support net neutrality, and the current rules on the books protecting it. Survey after survey (including several from the telecom industry itself) have found net neutrality has broad, bipartisan support. To try and undermine this reality, ISPs have spent more than a decade trying to frame the desire for a healthy, competitive internet -- free of entrenched gatekeeper control -- as a partisan debate. And they've largely been successful at it, sowing division and derailing discourse on a subject that, in reality, isn't all that controversial in the eyes of the Comcast-loathing public. This was highlighted again this week, when a broadband industry-funded study found that 98.5% of the original comments filed with the FCC oppose the agency's plan to kill net neutrality. Of the original, unique comments filed with the FCC (people that took the time to write out their thoughts instead of just signing a form letter), 1.52 million said they opposed the FCC's plan, compared with the 23,000 individuals that think gutting consumer protections was a nifty idea. Again, there's no debate here: the public (which the FCC is supposed to represent) viciously opposes this plan to dismantle Title II, and by proxy, the net neutrality rules. Large ISPs like Comcast, Verizon and AT&T have used every trick in the book to try and distort this reality, from publishing videos claiming that nobody's trying to kill net neutrality, to actively trying to con their own users into supporting gutting the essential protections. Shortly after this week's latest study was published, AT&T got right to work blatantly lying about what the study said, insisting that most of the "legitimate" comments filed with the FCC support killing net neutrality protections: "While Title II proponents may claim that millions of consumers representing the large majority of commenters support Title II, in fact, most of these comments were not legitimate. And when only legitimate comments are considered, the large majority of commenters oppose Title II regulation of Internet access." Again, that's a blatant lie, and the study AT&T helped fund actually found the exact opposite. But you'll notice a new AT&T tactic here: raising doubts about the integrity of the FCC commenting system to try and downplay genuine public opposition to the FCC's plan. As we've noted several times, someone has been filling the FCC comment system with fraudulent comments, using a bot to fill the proceeding alphabetically with bogus individuals (in some cases deceased). And the FCC has made it abundantly clear it has absolutely no interest in doing anything about it, though these fake comments are easy to single out. Now it's entirely possible that someone is just trolling the entire proceeding, thought it would be fun to stuff the system with millions of fraudulent comments, and the FCC and large ISPs are simply taking advantage. But given recent history, and the shenanigans that have riddled this debate for years, the idea that this is a concerted, coordinated effort to downplay the will of the public can't be ruled out. After all, this is an FCC that was willing to completely manufacture a DDoS attack just to try and downplay public anger, and is being sued for refusing to release details on its meetings with ISPs on this subject. 
And AT&T's recent history involves getting busted for ripping off taxpayers, tricking its customer base into opposing net neutrality, turning a blind eye to drug dealers running directory assistance scams on AT&T's own customers, and actively making bills more confusing to aid scammers, so you can determine for yourself whether this type of strategy lies within AT&T's lobbying and policy wheelhouse. Permalink | Comments | Email This Story
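As a quick sanity check, the study's 98.5% headline figure is consistent with the raw tallies cited above (roughly 1.52 million unique comments opposing the plan versus about 23,000 supporting it):

```python
# Sanity check of the headline figure using the tallies cited above.
oppose = 1_520_000
support = 23_000
share_opposed = oppose / (oppose + support)
print(f"{share_opposed:.1%}")  # 98.5%
```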

posted 19 days ago on techdirt
Back in 2012, we wrote about Onity, the company that makes a huge percentage of the keycard hotel door locks on the market, and how laughably easy it was to hack its locks with roughly $50 of equipment. Surprisingly, Onity responded to the media coverage and complaints from its hotel customers with offers of fixes that ranged from insufficient (a piece of plastic that covered the port used to hack the door locks) to cumbersome (replacing the circuit boards on the locks entirely) and asked many of these customers to pay for these fixes to its broken product. Many of these customers wanted to sue Onity for obvious reasons, but a judge ruled against allowing a class action suit to proceed. That was our last story on the subject. So... what happened? Well, Onity ended up springing for the fixes for some of their larger chain hotel customers, but not all of them. For the rest, it was on each hotel to decide to pay for the fix or not. Many, many of them absolutely did not and did nothing about the Onity locks on their doors, while those that did get the fix involving the plastic port cover quickly found out that the fix wasn't much of a fix at all. To see the fallout from all of that, one need only look at Wired's longform piece on the hellacious crime spree undertaken by one troubled young man, Aaron Cashatt, who managed to steal hundreds of thousands of dollars worth of stuff from hotel rooms using the afore-mentioned $50 worth of gear. The entire post is worth your time, with its fascinating look into Cashatt's background, the revelations of the Onity lock's failures, and where those two stories converged. One of the key points in all of this was that even before Cashatt started his crime spree, everyone, from Onity to the hotel chains to any member of the public that cared to know, was aware of how laughably insecure Onity's locks were, except that, for the most part, nobody bothered to do anything about it. Instead of Brocious' research protecting millions of hotel rooms from larceny-minded hackers, it served up a rare, wide-open opportunity to criminals. Soon other hacker hobbyists were posting YouTube videos of themselves demonstrating the vulnerability on real hotel doors, refining Brocious' gadget to work far more reliably. One security researcher in Chicago managed to miniaturize the components of the lock-hacking device until it fit inside the body of a dry-erase marker, with its plug hidden under the marker's cap. The attack became so notorious that it even made a brief cameo in the first season of USA Network's show Mr. Robot. But out of everyone who learned about the Onity keycard hack, only one person, perhaps, had the right mix of desperation, tech savvy, and moral flexibility to use it to its full criminal potential: Aaron Cashatt. Cashatt saw a news segment about the Onity flaw and began to use his own hacking device to exploit it almost immediately. With equipment that cost less than a AAA video game, Cashatt began hacking into hotels, starting at a Marriott. While perfecting his hacking tool and managing to hide it in a sunglasses case that he kept slung around his neck, he worked a waiter job during the day and smoked meth and broke into hotel rooms at night. Using the tool, Cashatt would walk out of hotel rooms with everything the visitor owned and much of what was owned by the hotels as well, including not just towels and toiletries, but flat-screen televisions as well. 
After deciding to skip a court hearing, he took his show on the road, leaving his corner of Arizona and trekking to the Midwest, where the spree continued. Even when he was arrested on completely unrelated drug charges, police had no idea that the string of hotel room robberies in progress across the country was his doing. When he was carted back to Arizona and let out on bail, he went right back to work.

Now with no job to hold him back, Cashatt, his friends, and an on-and-off girlfriend spent the next four months hitting hotels at a frenzied pace, sometimes as many as four in a day...working his way methodically across central Arizona. It was a month into that run that Onity began rolling out the plastic port-blocker fix to its locks.

Onity had finally begun distributing this fix for free to at least some of its hotel customers. But this barely slowed Cashatt down. Instead, he used a screwdriver to open the panel of the door lock and was able to access the port once more, the plastic blocker circumvented. With enough practice, he was able to do this in under half a minute. He went right back to work, fencing stolen goods through a network of friends and a jewelry store whose owner he trusted. It was only after one of his friends got pinched that the police managed to get wind of just how big Cashatt's operation had become. He once more hit the road and began breaking into hotels in Tennessee before trekking back west to California and hitting hotels there. It was there that the feds finally caught him, after he managed to steal an estimated half a million dollars worth of goods. Now in prison, Cashatt doesn't think much has changed.

"I guarantee you that if you tried this at some hotel in the Midwest, it would still work 19 out of 20 times," he says. For that, he blames Onity's negligence. "They just don't get it."

For its part, Onity remains opaque about how many fixes have been rolled out to how many hotel door locks, as well as exactly what form those fixes took: the plastic port-blocker variety or an actual circuit board replacement. The fact that the company isn't screaming about how many circuit board replacements it's doled out should tell you all you need to know about the answer to that question. The Wired author himself tested it out and managed to get his own hacking tool to unlock a hotel door on his fourth try. This isn't hard data of any kind, but with Onity itself ducking any kind of transparency, it's the best that can be done.

What should stick out most to everyone about this story is how the flaws in Onity's locks were uncovered only through the help of security researchers, oft maligned, whose work then went largely ignored. That willful ignorance allowed someone like Cashatt to go bananas on the hotel industry, all because Onity couldn't be bothered to fix its flawed product.

posted 19 days ago on techdirt
As multiple Techdirt stories attest, farmers do love their "ag-gag" laws, which effectively make it illegal for activists to expose animal abuse in agricultural establishments -- although, strangely, farmers don't phrase it quite like that. Big Ag -- the giant seed and agricultural chemical companies such as Monsanto, Bayer, and DuPont -- seem to have decided they want something similar for seeds. As an article in Mother Jones, originally published by Food and Environment Reporting Network, reports, it looks like they are getting it: With little notice, more than two dozen state legislatures have passed "seed-preemption laws" designed to block counties and cities from adopting their own rules on the use of seeds, including bans on GMOs. Opponents say that there's nothing more fundamental than a seed, and that now, in many parts of the country, decisions about what can be grown have been taken out of local control and put solely in the hands of the state. Supporters of the move claim that a system of local seed rules would be complicated to navigate. That's a fair point, but it's hard to believe Big Ag really cares about farmers that much. Some of the new laws go well beyond seeds: Language in the Texas version of the bill preempts not only local laws that affect seeds but also local laws that deal with "cultivating plants grown from seed.” In theory, that could extend to almost anything: what kinds of manure or fertilizer can be used, or whether a county can limit irrigation during a drought, says Judith McGeary, executive director of the Farm and Ranch Freedom Alliance. Along with other activists, her organization was able to force an amendment to the Texas bill guaranteeing the right to impose local water restrictions. Still, the law's wording remains uncomfortably open to interpretation, she says. You would have thought that farmers would welcome the ability to shape local agricultural laws according to local needs and local factors like weather, water and soil. But apparently ag-gagging activists to stop them doing the same is much more important. Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+ Permalink | Comments | Email This Story

posted 20 days ago on techdirt
Earlier this year, real estate litigator and aggrieved homeowner Barbara Andersen sued Zillow for providing a lower "Zestimate" than she believed her house was worth. She alleged Zillow violated Illinois state law by portraying its estimates as appraisals, even though it lacked the proper licensing to perform appraisals. Andersen sought an injunction blocking Zillow from posting information about her home (even publicly-available information) and offering a "Zestimate" on its selling price.

Andersen has just had her case tossed, although she's now representing others in a proposed class action against Zillow. At some point between February and earlier this week, Andersen's case was moved to a federal court and she's now listed at the bottom of court documents (as counsel of record), rather than up top as a plaintiff. The new lead plaintiffs are three Patels disputing Zestimates of their multi-million dollar properties. (This rearranging of plaintiffs and lawyers [and lawyers who were also plaintiffs] is unsettling, especially for those of us who learned what we know of the real estate business via repeated viewings of "Glengarry Glen Ross.")

The Patels (and "others similarly situated") aren't happy with Zillow. The Patels (collectively) have multiple properties on the market, all listed at prices considerably higher than Zillow's Zestimates. They claim, as Andersen did, that Zillow violates state law by offering something homebuyers might believe is an appraisal. A variety of interconnected laws results in the Patels attempting to coax a federal court into killing Zillow's estimates. As Eric Goldman summarizes, the Patels have gone down on strikes.

An Illinois putative class action was brought against Zillow over the zestimate on three grounds: (1) the zestimate was an unlicensed appraisal, (2) the house profile and zestimate constituted an intrusion into seclusion, and (3) Zillow's practices violate state consumer protection laws. Zillow wins on a 12(b)(6) motion to dismiss.

Unlicensed Appraisal: The applicable licensure statute expressly excludes "the procurement of an automated valuation model." Furthermore, the law doesn't support private causes of action.

Privacy Invasion: There's no intrusion when the zestimate is based on public data sources. The plaintiffs also don't explain how the intrusion is "offensive" or plead the required "anguish and suffering."

Consumer Protection Laws: The court says the zestimates are not false, misleading or confusing.

Goldman also points out no serious person is likely to confuse a Zestimate with an appraisal… at least not if they expect to be taken seriously. Courts in cases dug up by Goldman have called Zillow Zestimates everything from "inherently unreliable" to "incapable of accurate" valuations. One judge concluded "internet searches are insufficient evidence of property value," spreading the besmirchment to Zillow's competitors and pre-trial Googlers.

Zillow pled a First Amendment defense for its publication of lousy Zestimates and other public data. The court [PDF] doesn't make any attempt to address this pleading, as it finds plenty it doesn't like about the state law claims.

Zillow argues that the First Amendment requires dismissal of all of Plaintiffs' claims. (R. 18, Mem. Supp. Mot. Dismiss, 3.) Additionally, Zillow contends that First Amendment concerns aside, Plaintiffs fail to plead the required elements of their claims. (Id. at 9.)
While Zillow makes persuasive arguments with respect to the First Amendment, the Court need not and should not rule on them conclusively because Plaintiffs' claims fail under Illinois statutory law.

As the court points out, Zestimates are nothing more than "nonactionable statements of opinion" -- statements that involve neither an intrusion into personal privacy (because publicly-available info is used) nor a violation of Illinois real estate regulations. All claims have been dismissed without prejudice, meaning real estate litigator (and litigant) Barbara Andersen is welcome to try again. But she -- like the Patels she now represents -- will need to find a better angle than alleged state law violations to take another run at estimates they all subjectively feel are on the low end.
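For what it's worth, the licensing fight turns on that "automated valuation model" carve-out, and it helps to see just how mechanical such a model can be. Below is a deliberately toy sketch in Python -- the comp data and the price-per-square-foot math are our own illustrative assumptions, not Zillow's actual methodology -- of why this kind of output is a statistical estimate rather than an appraisal:

```python
# Toy "automated valuation model" along the lines the Illinois licensing statute
# carves out: a statistical estimate, not an appraisal. The comparable-sales data
# and the price-per-square-foot approach are illustrative assumptions only.
from statistics import median

def avm_estimate(subject_sqft: float, comparable_sales: list) -> float:
    """Estimate value as the subject's square footage times the median $/sqft of comps."""
    per_sqft = [sale["price"] / sale["sqft"] for sale in comparable_sales]
    return subject_sqft * median(per_sqft)

if __name__ == "__main__":
    comps = [
        {"price": 450_000, "sqft": 1_800},
        {"price": 520_000, "sqft": 2_100},
        {"price": 610_000, "sqft": 2_600},
    ]
    # No inspection, no licensed appraiser's judgment -- just arithmetic over
    # public-record data, which is why two similar homes can get very different numbers.
    print(round(avm_estimate(2_000, comps)))
```

No inspection, no licensed professional, just math over public records -- which is both why the statute carves it out and why homeowners find the resulting numbers so easy to dislike.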

posted 20 days ago on techdirt
We've well established that the internet of things (IOT) market is a large, stinky dumpster fire when it comes to privacy and security. But the same problems that plague your easily hacked thermostat or e-mail-password-leaking refrigerator take on a decidedly darker tone when we're talking about your health. The health industry's outdated IT systems are a major reason for a startling rise in ransomware attacks at many hospitals, but this same level of security and privacy apathy also extends to medical and surgical equipment -- and integral medical implants like pacemakers.

After a decade of warnings about dubious pacemaker security, researchers at MedSec earlier this year discovered that a line of pacemakers manufactured by St. Jude Medical was vulnerable to attacks that could kill the owner. The researchers claimed that St. Jude had a history of doing the bare minimum to secure its products, and did little to nothing in response to previous warnings about device security. St. Jude Medical's first response was an outright denial, followed by a lawsuit against MedSec for "trying to frighten patients and caregivers." Ultimately, the FDA was forced to issue its first ever warning about the security of a pacemaker earlier this year, though the agency somewhat downplayed the potentially fatal ramifications:

"The FDA has reviewed information concerning potential cybersecurity vulnerabilities associated with St. Jude Medical's Merlin@home Transmitter and has confirmed that these vulnerabilities, if exploited, could allow an unauthorized user, i.e., someone other than the patient's physician, to remotely access a patient's RF-enabled implanted cardiac device by altering the Merlin@home Transmitter. The altered Merlin@home Transmitter could then be used to modify programming commands to the implanted device, which could result in rapid battery depletion and/or administration of inappropriate pacing or shocks."

Inappropriate, indeed. St. Jude Medical has since been acquired by Abbott Laboratories, and back in April the FDA sent a warning to Abbott that it needed to design a comprehensive plan to fix the flaw (first revealed in August of last year) within fifteen days. That was followed up with a formal, voluntary recall notice issued by the FDA regarding the impacted pacemaker, believed to be the first of its kind. In its warning, the FDA urged the estimated 400,000 owners of this pacemaker model to schedule a physician appointment for a firmware update, lest they find themselves quite literally hacked. The FDA's alert was also joined by a warning from the Department of Homeland Security outlining the problem as such:

"The pacemaker's authentication algorithm, which involves an authentication key and time stamp, can be compromised or bypassed, which may allow a nearby attacker to issue unauthorized commands to the pacemaker via RF communications....The pacemakers do not restrict or limit the number of correctly formatted "RF wake-up" commands that can be received, which may allow a nearby attacker to repeatedly send commands to reduce pacemaker battery life."

Comforting. Many security experts have been quick to point out that this may be the turning point at which companies finally begin taking these sorts of problems more seriously. But the lengths it took to bring us to this point are downright comical, involving MedSec going so far as to at one point short St. Jude stock to bring necessary attention to the problem.
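To make the DHS description a little more concrete, here's a minimal sketch in Python -- purely illustrative, not St. Jude's actual firmware or protocol, with the key handling, limits and names all being our own assumptions -- of the two checks the advisory implies were weak or missing: authenticating wake-up commands against a fresh timestamp, and rate-limiting how many of them a device will honor.

```python
# Hypothetical sketch only. It illustrates the two weaknesses DHS described:
# a replayable/bypassable authentication scheme and unlimited "RF wake-up" commands.
import hmac, hashlib, time

SHARED_KEY = b"device-specific-key"   # assumption: a per-device key provisioned at manufacture
MAX_WAKEUPS_PER_HOUR = 20             # assumption: an arbitrary rate limit
ALLOWED_CLOCK_SKEW = 30               # seconds of acceptable timestamp drift

_wakeup_times = []

def _valid_tag(payload: bytes, timestamp: int, tag: bytes) -> bool:
    # Keyed MAC over the command and its timestamp, so a captured command
    # cannot simply be replayed later with a new timestamp.
    expected = hmac.new(SHARED_KEY, payload + str(timestamp).encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

def handle_rf_wakeup(payload: bytes, timestamp: int, tag: bytes) -> bool:
    """Accept a wake-up command only if it is fresh, authenticated, and rate-limited."""
    now = int(time.time())
    if abs(now - timestamp) > ALLOWED_CLOCK_SKEW:   # stale timestamp suggests a replay
        return False
    if not _valid_tag(payload, timestamp, tag):     # reject unauthenticated commands
        return False
    # The reported flaw was that correctly formatted wake-ups were unlimited,
    # letting a nearby attacker drain the battery by sending them repeatedly.
    recent = [t for t in _wakeup_times if now - t < 3600]
    if len(recent) >= MAX_WAKEUPS_PER_HOUR:
        return False
    recent.append(now)
    _wakeup_times[:] = recent
    return True

if __name__ == "__main__":
    now = int(time.time())
    tag = hmac.new(SHARED_KEY, b"wake" + str(now).encode(), hashlib.sha256).digest()
    print(handle_rf_wakeup(b"wake", now, tag))        # True: fresh and authenticated
    print(handle_rf_wakeup(b"wake", now - 600, tag))  # False: stale, treated as a replay
```

The specific numbers don't matter; the point is that without freshness checks, authentication and rate limiting, any correctly formatted RF chatter gets honored -- which is exactly the battery-drain scenario DHS describes.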
Hopefully, the entire saga is a shot across the bow that other security-apathetic medical implant manufacturers will wisely heed.

posted 20 days ago on techdirt
We've talked a lot over the years about the importance of standing up to patent trolls. Newegg, famously, has its "Never Settle" mantra for dealing with patent trolls. And we covered the case of Fark's Drew Curtis, a few years back, who simply refused to give in when a patent troll tried to shake him down. Part of that standing firm was that when he eventually "settled" the case, he demanded that he be allowed to reveal that the settlement was for $0 (usually trolls require a gag clause on settlements to avoid anyone finding out what happened).

But it appears Kaspersky Labs has taken this up a notch. Two years ago, we wrote about the patent troll with the somewhat on-the-nose name of Wetro Lan (get it? "we trollin'") that was threatening lots of companies. One company it went after was Kaspersky Labs, which it eventually sued in East Texas (naturally). Things didn't quite go according to Wetro Lan's plan. As Joe Mullin at Ars Technica explains, by the end of the case, Wetro Lan had to pay Kaspersky to get the company to agree to let the case die. During discovery, Kaspersky's lawyer was able to dig up the settlements that Wetro Lan actually got out of other companies, while also making it quite clear to Wetro Lan that its claims in this suit were completely bogus. So then it flipped the script:

"Their patent was for a firewall that's not user-configurable," Kniser said in an interview with Ars. "They knew ours was configurable. So they started taking weird positions, basically saying, 'Well, you can only configure it a little bit.' I think that would have gotten them in trouble as far as [patent] validity goes." Wetro Lan's settlement demands kept dropping, down from its initial "amicable" demand of $60,000. Eventually, the demands reached $10,000—an amount that's extremely low in the world of patent litigation. Kniser tried to explain that it didn't matter how far the company dropped the demand. "Kaspersky won't pay these people even if it's a nickel," he said. Then Kniser took a new tack. "We said, actually, $10,000 is fine," said Kniser. "Why don't you pay us $10,000?" After some back-and-forth, Wetro Lan's lawyer agreed to pay Kaspersky $5,000 to end the litigation. Papers were filed Monday, and both sides have dropped their claims.

Eugene Kaspersky, in his own blog post on the case, explains the importance of fighting patent trolls and how this went down, noting that it was pretty clear that Wetro Lan's lawyers knew "precious little" about what they were actually suing over and also that they "appear to have an incomplete knowledge of IP law." Kaspersky recognized that once it had the upper hand and knew it would win, any time wasted in court was lost money to Wetro Lan -- hence the offer to "settle" the case only if Wetro Lan paid up. Wetro Lan immediately offered $2,000 instead, before the parties settled on $5,000. And again, fighting patent trolls and being able to talk about the details of the victory are key:

Another notch on our bridge – representing the number of victories against patent extortionists. The score now: 5:0. That's not including the 23 out-of-court settlements (crucially: in which we paid zero dollars), nor the untold numbers of 'just try it!' letters we've sent back to trolls who then promptly crawl back under their bridges. Score: 5:0; total sum paid to patent trolls: $0. These are very unusual results in the USA. Accordingly, they're results that tend to keep the shrewder trolls off our back.
The less shrewd, however, still keep coming, not even taking the time to find out about our successful anti-troll reputation – or even just our basic anti-troll slogan: 'We fight them to the last bullet – their last bullet'. Perhaps if they did they too would stay away. But they don't. So they get stung. Kudos.

posted 20 days ago on techdirt
Techdirt has been covering the slow and painful attempts by the EU to make its copyright laws fit for the digital age for nearly four years now. Along the way, there have been some good ideas, and an astonishingly bad one that would require many online services to filter all uploads to their sites for potential copyright infringements. Despite the widespread condemnation of what is now Article 13 in the proposed Copyright Directive, an important new leak (pdf) published on the Statewatch site shows that EU politicians are still pushing to make the censorship filters mandatory. The document is an attempt by Estonia, which currently holds the Presidency of the Council of the EU -- one of the three main European Union bodies -- to come up with a revised text for the new Copyright Directive. In theory, it should be a compromise document that takes into account the differing opinions and views expressed so far. In practice, it is a slap in the face for the EU public, whose concerns it ignores, while pandering to the demands of the EU copyright industry. Estonia's problem is that the whole idea of forcing Web sites to filter uploads contradicts an existing EU directive, one from 2000 on e-commerce. This created a safe harbor for sites that were "mere conduits" or simply hosting material -- that is, took no active part in publishing material online. The Directive explicitly says: Member States shall not impose a general obligation on providers, when providing the services covered by Articles 12, 13 and 14, to monitor the information which they transmit or store, nor a general obligation actively to seek facts or circumstances indicating illegal activity. Most of the leaked document is a forlorn attempt to circumvent this unequivocal ban on upload filters: In order to ensure that rightholders can exercise their rights, they should be able to prevent the availability of their content on such [online] services, in particular when the services give access to a significant amount of copyright protected content and thereby compete on the online content services' market. It is therefore necessary to provide that information society service providers that store and give access to a significant amount of works or other subject-matter uploaded by their users take appropriate and proportionate measures to ensure the protection of copyright protected content, such as implementing effective technologies. It is reasonable to expect that this obligation also applies when information society service providers are eligible for the limited liability regime provided for in Article 14 of Directive 2000/31/EC [for hosting], due to their role in giving access to copyright protected content. The obligation of measures should apply to service providers established in the Union but also to service providers established in third countries, which offer their services to users in the Union. In other words, even though Article 14 of the E-commerce Directive provides a safe harbor for companies hosting content uploaded by users, the EU wants to ignore that and make online services responsible anyway, and to require upload filtering, even though that is forbidden by Article 15. Moreover, this would apply to non-EU companies -- like Google and Facebook -- as well. 
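To see why the "effective technologies" language is such a blunt instrument, consider what even the simplest upload filter looks like. The sketch below is purely illustrative Python -- real systems like YouTube's Content ID rely on perceptual fingerprints rather than exact hashes, and the registry here is an assumption -- but it shows the basic shape of matching uploads against a database of claimed works:

```python
# Toy illustration of the kind of upload filter Article 13 contemplates.
# Real deployments use perceptual fingerprinting; this exact-hash version just
# shows the basic shape of the obligation and why it is blunt.
import hashlib

# Assumption: rightsholders supply fingerprints of works they want blocked.
CLAIMED_FINGERPRINTS = {
    hashlib.sha256(b"...bytes of a claimed work...").hexdigest(): "Example Rightsholder",
}

def check_upload(file_bytes: bytes):
    """Return (allowed, reason). A match against a claimed work blocks the upload outright."""
    fingerprint = hashlib.sha256(file_bytes).hexdigest()
    owner = CLAIMED_FINGERPRINTS.get(fingerprint)
    if owner is not None:
        # No room here for quotation, parody, or any other exception -- the filter
        # only knows that the bytes match a claimed work.
        return False, f"matches work claimed by {owner}"
    # Any re-encode or trivial edit changes the hash, so determined infringers slip
    # through, while the service still bears the cost of checking every upload.
    return True, "no match"

if __name__ == "__main__":
    print(check_upload(b"some user upload"))
```

Exact matching is trivially evaded by re-encoding, fuzzier matching generates false positives, and neither approach can weigh exceptions or context -- yet every covered service would have to run something like this over every single upload.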
The desperation of the Estonian Presidency is evident in the fact that it provides not one, but two versions of its proposal, with the second one piling on even more specious reasons why the E-commerce Directive should be ignored, even though it has successfully provided the foundation of Internet business activity in the EU for the last 17 years. Jettisoning that key protection will make it far less likely that startups will choose the EU for their base. The requirement to filter every single upload for potential infringements will probably be impossible to meet, and certainly prohibitively expensive, while the legal risks of not filtering will be too great. So the Estonian Presidency is essentially proposing the death of online innovation in the EU -- rather ironic for a country that prides itself on being in the digital vanguard.

The leaked document also contains two proposals for Article 11 of the Copyright Directive -- the infamous link tax. One takes all the previous bad ideas for this "ancillary copyright" and makes them even worse. For example, the new monopoly right would apply not just to text publications in any media -- including paper -- but also to photos and videos. In addition, it would make hyperlinks subject to this new publisher's "right". The only exceptions would be those links not involving what is termed a "communication to the public" -- a concept so vague that even the EU's top courts can't agree on what it means. The other proposal completely jettisons the idea of any kind of link tax, and instead wants to introduce "a presumption for publishers of press publications":

in the absence of proof to the contrary, the publisher of a press publication shall be regarded as the person entitled to conclude licences and to seek application of the measures, procedures and remedies … concerning the digital use of the works and other subject-matter incorporated in such a press publication, provided that the name of the publisher appears on the publication.

This is something that has been suggested by others as providing the best solution to what publishers claim is a problem: the fact that they can't always sue sites for alleged copyright infringement of material they have published, because their standing is not clear. It effectively clarifies that existing copyright law can be used to tackle abusive re-posting of material. As such, it's a reasonable solution, unlike the link tax, which isn't. The fact that two such diametrically-opposed ideas are offered in a document that is meant to be creating a clear and coherent way forward is an indication of what a mess the whole EU Copyright Directive project remains, even at this late stage. Unfortunately, the Estonian Presidency's unequivocally awful drafts for Article 13 suggest that the EU is still planning to bring in a law that will be bad for the Internet, bad for innovation, and bad for EU citizens. Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

posted 20 days ago on techdirt
With the fully immersive Complete Web Developer Course that covers everything 'code', you'll learn everything you need to start programming like a pro. Over 30 hours of content will introduce you to the fundamentals of HTML5, CSS3 and Python, teach you how to build responsive websites with jQuery, PHP 7, MySQL 5 and Twitter Bootstrap, show you smart ways to add dynamic content using APIs, and more. The course is on sale for only $19. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

posted 20 days ago on techdirt
Earlier this week, we wrote about Donald Trump and Jeff Sessions bringing back the Defense Department's 1033 program, which helped militarize local police forces with surplus military equipment. We've been covering all sorts of problems with the 1033 program over the years, and people like Radley Balko have written entire books on the problem. And the previous ban on the 1033 program only put a fairly narrow limit on the practice of militarizing police -- but now even those modest limits are gone.

What's truly incredible, however, is the complete nonsense being used to justify this. Attorney General Jeff Sessions gave a speech about this on Monday, in which he trotted out his standard misleading and out-of-context stats, falsely claiming that there's some massive new crimewave across the country, when there's really just been a tiny bump after decades of decline in crime rates (the use of percentages by Sessions shows that he likely knows the absolute numbers are so meaningless that he has to mislead with percentages working off a small base). But, even with the usual misleading claims about violence and violence directed towards police, I still never expected him to... point to Houston and the impact of Hurricane Harvey as a reason for increased police militarization. But that's exactly what he did:

Those restrictions went too far. We will not put superficial concerns above public safety. All you need to do is turn on a TV right now to see that for Houstonians this isn't about appearances, it's about getting the job done and getting everyone to safety.

Wait. Law enforcement in Houston needs surplus military equipment to rescue people? Last I've seen, it's been tons of good-hearted people using boats of all kinds to go around rescuing people. I don't see much need for military equipment. Once again, this looks like law enforcement using "any means necessary" to justify getting its military surplus toys, despite tremendous evidence of how this process is abused, how it harms community relations and how it leads to the civil rights of the public being violated. To point to the disaster in Houston as a reason for restarting the program is not just frivolous, it's dangerous.

posted 20 days ago on techdirt
Let's not mince words: the FCC's plan to gut net neutrality protections in light of severe public opposition is likely one of the more bare-knuckled acts of cronyism in modern technological and political history. That's because the rules have overwhelming, bipartisan support from the vast majority of consumers, most of whom realize the already imperfect rules are some of the only consumer protections standing between consumers and giant, uncompetitive companies like Comcast. Repealing the rules only serves one interest: that of one of the least liked, least-competitive industries in America. That said, the broadband industry and the FCC keep trying to obfuscate this reality, and failing. The latest example: a new study funded by the industry itself took a closer look at the 21.8 million comments filed with the FCC so far on its plan to roll back the rules, and found, once again, the vast majority of the citizens the agency is supposed to represent oppose the FCC's plan. The full study was conducted by consulting firm Emprata and funded by Broadband for America, a lobbying front organization backed by Comcast, AT&T, Verizon, Charter and most large wireless carriers. As we've consistently reported, somebody has been backing an attempt to fill the FCC's comment proceeding with entirely bogus, bot-crafted support for the FCC's plan. There have even been bogus comments filed in support of killing net neutrality made in my name (which the FCC has said they'll do nothing about). The Emprata study found that even including this farmed detritus, the majority of the comments are in favor of retaining the rules. Including spam, bot-posts, and form letters (the latter being used by both sides), the study found 60% were opposed to the FCC's plan. But when the firm only analyzed original comments coming from actual human beings, it found that 98.5% of original comments filed support keeping the rules intact. And while form letters are utilized by both sides of this asymmetrical debate to galvanize public action, the study also found very few original comments in support of Ajit Pai and friends' handout to the telecom sector: "[T]here are considerably more "personalized" comments (appearing only once in the docket) against repeal (1.52 million) versus 23,000 for repeal. Presumably, these comments originated from individuals that took the time to type a personalized comment. Although these comments represent less than 10 percent of the total, this is a notable difference." The overwhelming majority of comments for and against repealing Title II are form letters (pre-generated portions of text) that appear multiple times in the docket. The form letters likely originated from numerous sources organized by groups that were for or against the repeal of Title II. Form letters comprise upwards of 89.8 percent of comments against Title II repeal and upwards of 99.6 percent of the comments for Title II repeal. Again, this supports numerous, previous studies indicating that net neutrality protections have broad, bipartisan support. Other cable industry funded studies have found the same thing. There's no debate here: the FCC is engaged in killing rules solely so it's easier for entrenched duopolists to abuse the lack of competition in the broadband space. And while ISPs and the FCC like to idiotically frame this as restoring freedom or other such nonsense, the public -- after years of abuse by this dysfunctional sector -- doesn't appear to be quite as stupid as the industry and its allies hoped. 
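For a sense of what this kind of comment analysis actually involves, here's a rough sketch of the triage: group identical comment text to separate form letters from one-off "personalized" comments, and flag filings from throwaway e-mail domains. This is our own illustration in Python -- the field names and domain list are assumptions, not the FCC's export format or Emprata's actual code.

```python
# Rough sketch of comment triage: identical text appearing many times is treated
# as a form letter, text appearing once as a "personalized" comment, and filings
# from disposable e-mail domains are flagged separately. Field names are assumed.
from collections import Counter

DISPOSABLE_DOMAINS = {"fakemailgenerator.com"}  # plus whatever list an analyst maintains

def triage(comments):
    """comments: iterable of dicts like {"email": ..., "text": ...}."""
    texts = [c["text"].strip().lower() for c in comments]
    counts = Counter(texts)
    form_letters = sum(n for n in counts.values() if n > 1)   # total filings of repeated text
    personalized = sum(n for n in counts.values() if n == 1)  # one-off comments
    disposable = sum(
        1 for c in comments
        if c["email"].rsplit("@", 1)[-1].lower() in DISPOSABLE_DOMAINS
    )
    return {"form_letters": form_letters,
            "personalized": personalized,
            "from_disposable_domains": disposable}

if __name__ == "__main__":
    sample = [
        {"email": "a@example.com", "text": "Please keep the Title II rules."},
        {"email": "b@example.com", "text": "Please keep the Title II rules."},
        {"email": "c@fakemailgenerator.com", "text": "Repeal Title II now."},
    ]
    print(triage(sample))
```

None of this is hard to do, which is part of the point: the FCC could run the same kind of analysis itself rather than leaning on an industry-funded version of it.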
Meanwhile, the study also zooms in more closely on the scope of the fraudulent comment problem the FCC seems intent on ignoring, claiming that bogus bots are submitting comments to the FCC both in support and opposition to rule repeal. In fact, 7.75 million comments appear to be completely bogus: "More than 7.75 million comments... appear to have been generated by self-described 'temporary' and 'disposable' e-mail domains attributed to FakeMailGenerator.com and with nearly identical language. Virtually all of those comments oppose repealing Title II. Assuming that comments submitted from these e-mail domains are illegitimate, sentiment favors repeal of Title II (61 percent for, 38 percent against)." Who's doing this isn't clear, and the FCC has refused to investigate. Someone that supports net neutrality could have crafted a bot to spam the system with comments opposing the FCC's plan. But it's also possible an industry-linked opponent to net neutrality is trying to pollute the entire comment system to invalidate the entire public forum. That's why former FCC staffers like Gigi Sohn are urging the FCC to do its own analysis of the comments instead of relying on data from the telecom industry: "August 30th could very well mark the official beginning of the end for the Open Internet. With the closing of the public comment period for the FCC’s proceeding to repeal the 2015 Net Neutrality rules, the record is now full of tens of millions of comments, many of them demonstrably fake. Incredibly, it doesn't even matter if the facts are real or alternative because Chairman Pai intends to ignore them all so that he can eliminate the rules and protections for Internet users and innovators as quickly as possible - which also explains why he refuses to make public information that is critical to his FCC's decision making." Any real FCC inquiry is unlikely to happen, and the FCC appears poised to use the bogus comments to justify ignoring public feedback entirely when it votes to finally kill the rules in the next few months. That's when the real fun begins, as all of the agency's efforts to downplay vicious public opposition to its plan (including apparently fabricating a DDoS attack) will be front and center in the inevitable lawsuits to come. Permalink | Comments | Email This Story

posted 20 days ago on techdirt
A federal court in Oakland, California has come to a conclusion the DOJ definitely didn't want it to reach, as Cyrus Farivar reports for Ars Technica.

In the 39-page ruling, US District Judge Phyllis Hamilton notably found that the use of a stingray to find a man named Purvis Ellis was a "search" under the Fourth Amendment—and therefore required a warrant.

The DOJ -- despite issuing its own guidance requiring warrants for Stingrays in 2015 -- argued in court earlier this year that no warrant was needed to deploy the Stingray to locate a shooting suspect. It actually recommended the court not reach a conclusion on the Fourth Amendment implications of Stingray use, as it had plenty of warrant exceptions at the ready -- mainly the "exigent circumstances" of locating a suspect wanted for a violent crime. Unfortunately for the federal government (and all other law enforcement agencies located in the court's jurisdiction), the court declined the DOJ's offer to look the other way on Constitutional issues. It found a Stingray's impersonation of a cell tower to obtain real-time location information is a search under the Fourth Amendment.

The court adopts Judge Koh's reasoning in In re Application for Telephone Information, 119 F. Supp. 3d at 1026, to hold that cell phone users have an expectation of privacy in their cell phone location in real time and that society is prepared to recognize that expectation as reasonable. While Judge Koh limited her analysis to the privacy interest in historical CSLI, the court determines that cell phone users have an even stronger privacy interest in real time location information associated with their cell phones, which act as a close proxy to one's actual physical location because most cell phone users keep their phones on their person or within reach, as the Supreme Court recognized in Riley. In light of the persuasive authority of Lambis, and the reasoning of my learned colleagues on this court recognizing a privacy interest in historical cell site location information, the court holds that Ellis had a reasonable expectation of privacy in his real-time cell phone location, and that use of the Stingray devices to locate his cell phone amounted to a search requiring a warrant, absent an exception to the warrant requirement.

The court also has something to say about the FBI/Oakland PD's use of a pen register order as a stand-in for a warrant specifically detailing the type of device used to obtain these so-called "phone records."

The government contends that since the Stingray devices used in this case were configured in compliance with the pen register statute, then the provisions of the pen register statute, including the "emergency" provisions, govern their operation. Doc. no. 321 at 9 (citing 18 U.S.C. § 3125). The government does not address the key issue in dispute, namely, whether the provisions of the pen register statute and the SCA provide the appropriate standard for using a CSS to locate a cell phone in real-time. The court follows Judge Illston's determination in Cooper, 2015 WL 881578, that the provisions of the pen register statute and the SCA do not authorize the use of a CSS to disclose realtime information about a cell phone user's physical location, and that such location monitoring must be authorized by a showing of probable cause.

It also points out the DOJ's reliance on the Stored Communications Act to salvage its warrantless Stingray use is misplaced -- something that could be gathered from the name of the statute alone.
[C]ongress intended that the SCA “was to be used as a means to obtain data which has already been stored at the time the government seeks to obtain it,” as opposed to real-time data. Ultimately, though, the court denies the suppression of the evidence, allowing the government's "exigent circumstances" argument to prevail. This may prove to be a good thing in the long run (although it does little for the defendant). Allowing the government to keep its evidence gives it no reason to appeal the decision. And this decision implements a warrant requirement for obtaining real-time cell site location info and gives certain third-party records an expectation of privacy. Permalink | Comments | Email This Story

posted 20 days ago on techdirt
Searching for stories about Sega here at Techdirt turns up a seriously mixed bag. While the company has managed to be on the right side of history on issues like SOPA and fan-made games, it has also managed to be strongly anti-consumer on game mods and has occasionally wreaked havoc on the YouTube community, all in the name of copyright protectionism. Despite all of this, Sega has gone to some lengths to successfully craft for itself a public image more accessible and likeable than that of its long-time rival Nintendo. Stories like the following will put dents in that image, however.

Sega recently ported its new title Sonic Mania to the PC and released it on the Steam platform. The port also included Denuvo DRM and an always-online requirement, except that Sega (oops!) forgot to tell anyone about either.

I tried loading the Windows version of Sonic Mania while my Steam account was offline. That's when Sonic Mania informed me, in no uncertain terms, that "Steam user must be logged in to play this game." Turns out, Sega has applied the much-maligned Denuvo copy-protection system to Sonic Mania's PC version—and this Denuvo implementation won't unlock the game for players so long as Steam is operating in "offline mode." Until the game receives an update, Sonic Mania fans hoping to play the PC version in an offline capacity are out of luck. (Your backup option, should you want to do something like board a plane, is to boot the game while connected to Wi-Fi, then disconnect from the Internet and leave the game running in the background until you're ready to play. It's not necessarily an ideal workaround.)

Gamers immediately began complaining both that the DRM was keeping them from playing their legitimately purchased game and that the Steam store page for Sonic Mania was devoid of any mention of Denuvo or the online requirement, whether on the system requirements page or anywhere else for that matter. Somewhat oddly, a Steam account with the handle of "Sega Dev" responded to the complaints, saying the omission on the store page was a mistake. That mistake has been rectified and the store page now informs buyers of the Denuvo requirement. But that same account also informed Steam users that "Sonic Mania is intended to be played offline", and has promised to investigate the issue. Even stranger, the PR lead for the Sonic franchise went further still and practically begged the public to complain to the company about Denuvo and the online requirement.

In particular, please do share your feedback on DRM or any issues you're having at the link above. Make your voices heard. — Aaron Webber (@RubyEclipse) August 29, 2017

I simply can't recall ever having seen anything remotely like this, with the PR wing of a company soliciting complaints to corporate in what sure seems like a way to get corporate to move off of a DRM scheme. It seems there is some infighting at Sega over this requirement, though to what level that infighting rises is unknown to me. Any Sega employees reading this are free to contact me and relay your concerns.

Regardless, this is a terrible look for Sega among the gaming community. Including a much-maligned DRM scheme and requiring a single-player game to be online to play it can only have one sort of impact on the company's standing with the public. While Sega has not removed Denuvo from the game entirely, it has since released a patch that allows the game to be played offline. The damage, however, has likely already been done.
