posted 17 days ago on techdirt
Less than a month after a first report was delivered on Washington, DC police body camera use, a second one has arrived. And it seems to contradict some assertions made in the first report. The first report was put together by an extension of DC's government called The Lab @ DC. It showed body camera use doing almost nothing to curtail use of force by officers. This seemed to undercut the notion that body cameras can be a tool of accountability. But they never will be -- not if the agencies using them remain uninterested in punishing officers for misconduct. The Lab @ DC report stated officers -- more than 2,000 of them -- weren't observed repeatedly or intentionally violating body camera activation policies.

Other researchers have suggested that BWCs may fail to affect outcomes because of nonadherence: officers, for a variety of reasons, may not use their assigned cameras according to departmental policy. They may fail to turn on the camera, for example. We have no indication that non-adherence was a widespread problem in this study. For 98% of the days in 2016, MPD averaged at least one video (and often many more) per call for service associated with a treatment officer. Further, even for the 2% of days in 2016 in which the number of videos uploaded was less than the number of incidents for which we would expect them, the difference is minimal, with 96% average adherence based on our measure.

The latest report, however, comes to the opposite conclusion. This one [PDF], put together by DC's police oversight board, shows plenty of nonadherence. (via FourthAmendment.com)

More than a third of cases investigated by a D.C. police oversight board after complaints were made about officers’ conduct this past year involved officers who did not properly use their body-worn cameras during those incidents, according to a report made public Tuesday. Some officers turned the cameras on too late, others too early, the report from the Office of Police Complaints found. In 13 percent of the cases, at least one officer at a crime scene or incident failed to turn on the camera, though colleagues did.

This is causing problems with accountability. Michael G. Tobin, the director of the Office of Police Complaints, says these "failures" sometimes compromise entire internal investigations. But he's also quick to excuse the officers, citing the newness of the technology. The "newness" may contribute to some unintentional failure to follow policy, but it's not as though the department's body camera policy is full of contradictory instructions on activations.

MPD General Order SPT-302.13 specifies that “[m]embers, including primary, secondary, and assisting members, shall start their BWC recordings as soon as a call is initiated via radio or communication from OUC [Office of Unified Communications] on their mobile data computer (MDC), or at the beginning of any self-initiated police action.”

Cameras should be rolling for pretty much any officer interaction with the public. The problem for DC police oversight -- and the public itself -- is that these activation failures compromise investigations of police misconduct. To investigators inside and outside the department, there's no discernible difference between forgetting to turn on a camera and deliberately leaving a camera off. The small upside is the 2,800 cameras in use, which lowers the chance that all responding officers will fail to produce footage. The camera policy doesn't leave activation to officer discretion.
But the hardware does, and that's an issue that needs to be addressed. Surprisingly, it's the local police union that's calling for additional accountability measures. Police Sgt. Matthew Mahl, the chairman of the police union, said he plans to ask the department for new equipment that would automatically turn on body cameras when a gun is removed from the holster. That will help cover cases where deadly force is threatened or deployed. But there are a lot of misconduct and excessive force complaints that will fall through the "gun out, camera on" cracks. The combination of both reports suggests cameras still aren't fixing law enforcement. Too many officers still feel the equipment is optional, even when it's issued at the start of every shift and clipped to their chests for the next 12 hours. If there's no downturn in force deployment, it's because no one's made it clear the absence of footage should be nearly as damning as the existence of damning footage. Permalink | Comments | Email This Story
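To make that coverage gap concrete, here is a minimal, purely hypothetical sketch of trigger-based camera activation -- none of these names correspond to any vendor's actual firmware or API. A camera wired only to a holster sensor never starts recording during the routine stops and calls that generate most misconduct complaints; adding dispatch and self-initiated-contact triggers (the events MPD's written policy already covers) is what closes the gap.

    # Hypothetical sketch of trigger-based body camera activation.
    # Trigger and BodyCamera are illustrative names, not a real vendor API.
    from enum import Enum, auto

    class Trigger(Enum):
        HOLSTER_DRAW = auto()     # the union's proposed hardware trigger
        DISPATCH_CALL = auto()    # call initiated via radio or MDC
        SELF_INITIATED = auto()   # officer-initiated stop or contact

    class BodyCamera:
        def __init__(self, auto_triggers):
            self.auto_triggers = set(auto_triggers)
            self.recording = False

        def on_event(self, trigger):
            # Recording starts only if this event type is wired up as a trigger.
            if trigger in self.auto_triggers and not self.recording:
                self.recording = True
                print(f"recording started by {trigger.name}")

    # "Gun out, camera on" only: a routine stop never starts the camera.
    holster_only = BodyCamera({Trigger.HOLSTER_DRAW})
    holster_only.on_event(Trigger.SELF_INITIATED)   # nothing recorded
    holster_only.on_event(Trigger.HOLSTER_DRAW)     # recording starts

    # Policy-complete triggers cover the interactions complaints come from.
    full_policy = BodyCamera({Trigger.HOLSTER_DRAW, Trigger.DISPATCH_CALL, Trigger.SELF_INITIATED})
    full_policy.on_event(Trigger.SELF_INITIATED)    # recording starts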

posted 17 days ago on techdirt
This morning, at about the same time as I published my article criticizing Senator Portman's decision to push forward with SESTA, an amended version of the bill was released, that has only a few small changes. Most notably it appears to improve the "knowledge" standard, which was definitely the worst part of the bill. The original bill had the following standard: The term ‘participation in a venture’ means knowing conduct by an individual or entity, by any means, that assists, supports, or facilitates a violation... The concern here was twofold. "Knowing conduct" means knowing of the conduct, not the outcome. That is, knowing that people can comment, not that those comments "facilitate" a violation of sex trafficking laws. That's way too broad. Separately, the "assists, supports, or facilitates" language is very broad, and includes completely passive actions (facilitates), rather than active participation. The updated manager's amendment fixes... just some of this. It now says: The term 'participation in a venture' means knowingly assisting, supporting, or facilitating a violation... So now it's "knowingly" doing the other things, rather than just "knowing conduct." That's better. But, it's still very broad. And the "facilitation" is still there. Making it "knowingly facilitates" certainly helps, but it's still much broader than before. Because you have now created what is effectively a notice-and-takedown system for all kinds of content. Someone just needs to claim that your site "facilitates" sex trafficking, and now you have "knowledge." Thus, the strong incentive will be to remove, remove, remove. As we already see, the DMCA notice and takedown provisions are widely abused. Anyone who thinks this won't be widely abused has not been paying attention. Furthermore, even "knowingly assisting, supporting, or facilitating" is going to lead to a lot of problems. We know this because we already lived through it with the DMCA. The entire 10 year fight between Viacom and YouTube was, in large part, over the definition of "knowing." Because Viacom wanted it to be a broad standard of "knowing that bad stuff happens on the platform," while YouTube argued (correctly) for an "actual knowledge" meaning, which means that you would have to have knowledge of specific content that violates the law, and then be responsible for removing that specific content. If you just make "knowing" the standard, without the "actual knowledge" part, you're in for lawsuits arguing that general knowledge makes you guilty. And that could impact tons of companies. Take Tinder. The incredibly popular dating app is almost certainly used by some sex traffickers to traffic people against their will. Here's an article from three years ago talking about sex trafficking on Tinder. Boom. Now Tinder has "knowledge" that its platform is "assisting, supporting, or facilitating" sex trafficking. It may now be both civilly and criminally liable. So, you tell me, what should Tinder do to get rid of this liability? I'll wait. And if you think no one will bother to sue Tinder over something like this, need I remind you of the many lawsuits we've been writing about in which people are suing every social media platform because vaguely defined "terrorists" use the platform? At least under the DMCA there's a clear "safe harbor" setup, whereby companies know the conditions under which they need to remove stuff to avoid liability. SESTA has no safe harbor language. It just says knowledge. But then what? 
We're in for years of litigation before courts determine what the hell this means, and that likely means startups will die. And others will never even have a chance to get off the ground. And, once again, this doesn't solve the other giant concern we had about the original bill, which is that this will encourage platforms to stop helping law enforcement and to stop monitoring their platforms for trafficking, because doing so can constitute "knowledge" and make them liable, if they are unable to wave a magic wand and make all such conduct disappear. In other words, the new bill is still hugely problematic. And that's why it's hugely troublesome that the Internet Association -- the giant lobbying organization representing larger internet companies has now come out in support of the new SESTA: “Internet Association is committed to combating sexual exploitation and sex trafficking online and supports SESTA. Important changes made to SESTA will grant victims the ability to secure the justice they deserve, allow internet platforms to continue their work combating human trafficking, and protect good actors in the ecosystem.” “Internet Association thanks cosponsors Sen. Portman and Sen. Blumenthal for their careful work and bipartisan collaboration on this crucially important topic and Chairman Thune and Ranking Member Nelson for their leadership of the Commerce Committee. We look forward to working with the House and Senate as SESTA moves through the legislative process to ensure that our members are able to continue their work to fight exploitation.” I honestly am flabbergasted at this move by the Internet Association. This will do serious, serious harm to tons of internet companies. Since the Internet Association represents the bigger tech companies, perhaps they stupidly feel that they can handle the resulting mess. But smaller organizations are going to die because of the overreach of this legislation. This is a shameful move in which the Internet Association has sold its soul. I know that many of the big internet companies were under lots of pressure this week from Congress over things like Russian ads, and it almost feels like this is their attempt to appease Congress, since some in Congress have (totally incorrectly) framed the SESTA debate as being one where tech companies were opposing efforts to stop sex trafficking. If you're already fending off charges of helping foreign adversaries undermine elections, perhaps they felt they didn't want to add bogus claims of supporting sex trafficking to the pile. But, this is a bad, bad decision. Yes, the manager's amendment is slightly better, but it's not good. It's bad for the internet. It's bad for free speech. And the fact that the Internet Association has stupidly put its stamp of approval on this is going to make it much more difficult to stop. Already Senators Blumenthal and Portman are pretending that because the Internet Association is on board, it means all of "tech" is on board. This is wrong and it's dangerous. The large members of the Internet Association may be able to survive this mess (though it will be costly) but smaller organizations are going to be harmed. And, in the end, it will do nothing to stop sex trafficking, and could even make the problem worse. Permalink | Comments | Email This Story

posted 17 days ago on techdirt
The news about the DOJ trying to subpoena Twitter calls to mind another egregious example of the government trying to unmask an anonymous speaker earlier this year. Remember when the federal government tried to compel Twitter to divulge the identity of a user who had been critical of the Trump administration? This incident was troubling enough on its face: there’s no place in a free society for a government to come after its critics. But largely overlooked in the worthy outrage over the bald-faced attempt to punish a dissenting voice was the government’s simultaneous attempt to prevent Twitter from telling anyone that the government was demanding this information. Because Twitter refused to comply with that demand, the affected user was able to get counsel and the world was able to know how the government was abusing its authority. As the saying goes, sunlight is the best disinfectant, and by shining a light on the government's abusive behavior it was able to be stopped. That storm may have blown over, but the general issues raised by the incident continue to affect Internet platforms – and by extension their users and their speech. A significant problem we keep having to contend with is not only what happens when the government demands information about users from platforms, but what happens when it then compels the same platforms to keep those demands a secret. These secrecy demands are often called different things and are born from separate statutory mechanisms, but they all boil down to being some form of gag over the platform’s ability to speak, with the same equally troubling implications.

We've talked before about how important it is that platforms be able to protect their users' right to speak anonymously. That right is part and parcel of the First Amendment because there are many people who would not be able to speak if they were forced to reveal their identities in order to do so. Public discourse, and the benefit the public gets from it, would then suffer in the absence of their contributions. But it's one thing to say that people have the right to speak anonymously; it's another to make that right meaningful. If civil plaintiffs, or, worse, the government, can too easily force anonymous speakers to be unmasked, then the right to speak anonymously will only be illusory. For it to be something speakers can depend on to enable them to speak freely, there have to be effective barriers preventing that anonymity from too casually being stripped by unjust demands.

One key way to prevent illegitimate unmasking demands is to fight back against them. But no one can fight back against what they are unaware of. Platforms are thus increasingly pushing back against the gags preventing them from disclosing that they have received discovery demands as a way to protect their communities of users. While each type of demand varies in its particulars (for instance, a civil subpoena is different from a grand jury subpoena, which is different from an NSL, which is different from the 19 USC Section 1509 summons that was used against Twitter in the quest to discover the Trump critic), as well as the rationale for why the demanding party might have sought to preserve the secrecy around the demand with some sort of gag, all of these unmasking demands still ultimately challenge the durability of an online speaker's right to remain anonymous.
Which is why rulings that preserve, or, worse, even strengthen, gag rules are so troubling because they make it all the more difficult, if not outright impossible, to protect legitimate speech from illegitimate unmasking demands. And that matters. Returning to the example about the fishing expedition to unmask a critic, while it's great that in this particular case the government quickly dropped its demand on Twitter, questions remain. Was Twitter the only platform the government went after? Perhaps, but how would we know? How would we know if this was the only speech it had chosen to investigate, or the 1509 summons the only unmasking instrument it had used to try to identify the speaker? If the other platforms it demanded information from were, quite reasonably, cowed by an accompanying demand for secrecy (the sanctions for violating such an order can be serious), we might never know the answers to these questions. The government could be continuing its attacks on its apparently no-longer-anonymous critics unabated, and speakers who depended on anonymity would unknowingly be putting themselves at risk when they continued to speak. This state of affairs is an affront to the First Amendment. The First Amendment was intended in large part to enable people to speak truth to power, but when we make it too hard for platforms to be partners in protecting that right it entrenches that power. There are a lot of ways that platforms should have the ability to be that partner, but one of them must be the basic ability to tell us when that right is under threat. Permalink | Comments | Email This Story

posted 17 days ago on techdirt
I'm not sure who Dianne Feinstein thinks she is, but she's going after Twitter users' private communications. As part of the ongoing hearings into Russian interference in the election process (specifically marketing efforts by Russian troll armies), Feinstein has asked Twitter [PDF] to hand over a bunch of information. Most of the demands target Twitter itself: documents related to ad campaigns, investigative work by Twitter to uncover bot accounts, communications between Twitter and Russian-connected entities, etc. Then there's this demand, which doesn't ask Twitter to turn over communications from Twitter, but rather users' private messages.

All content of each Direct Message greater than 180 days old between each Requested Account contained in Attachment A and any of the following accounts:

A. @wikileaks (https://twitter.com/wikileaks, 16589206);
B. @WLTaskForce (https://twitter.com/WLTaskforce, 783041834599780352);
C. @GUCCIFER_2 (https://twitter.com/GUCCIFER_2, 744912907515854848);
D. @JulianAssange_ (https://twitter.com/JulianAssange, 181199293);
E. @JulianAssange (https://twitter.com/JulianAssange, 388983706); or
F. @granmarga (https://twitter.com/granmarga, 262873196).

15. For each Direct Message identified in response to the preceding requests, documents sufficient to identify the sender, receiver, date, and time each message was sent.

Feinstein's acting like she can use the ECPA's "older than 180 days" trick -- most commonly applied to emails -- to obtain private communications between Twitter users. That's not really how this works. Law enforcement can demand these with a subpoena, but a non-law enforcement entity can't. Feinstein isn't a law enforcement officer. She's a Senator. There's no reason for Twitter to comply with this part of the order. In fact, it may be illegal for Twitter to turn these communications over. The Stored Communications Act forbids service providers from handing out this information to anyone without a warrant. If Feinstein really wants these communications, she'd better turn this into a law enforcement investigation and have someone obtain the proper judicial permission slip. Feinstein knows this part of the request is a bit off. That's why she attempts to minimize the multitude of problems in her request with this:

While I recognize that this type of information is not routinely shared with Congress, we have sought to limit the requests to communications only with those entities identified as responsible for distribution of material that was unlawfully obtained through Russian cyberattacks on US computer systems.

This would seem to indicate an actual investigation involving actual law enforcement agencies is a possibility. If so, demands for private communications with these accounts can wait for an actual search warrant. If not, Twitter is well within its rights to refuse her request. This request will sweep up all sorts of communications from accounts not currently under investigation, either by the Senate subcommittee or any US law enforcement agency. It's more than just the six accounts listed -- even though each of those may have received hundreds of Direct Messages. There's another list -- Exhibit A -- that hasn't been made public. Any perceived violations of privacy laws witnessed here have the chance to grow exponentially should Feinstein somehow coax Twitter into turning over these messages. This is a stupid and dangerous request from a public servant who should know better. Permalink | Comments | Email This Story

posted 17 days ago on techdirt
As you probably have heard, last night, for a period of 11 minutes, Donald Trump's Twitter account simply vanished from the site. Not surprisingly, lots of people noticed quickly... and, then it came back. Soon after, Twitter admitted the account was "inadvertently deactivated due to human error by a Twitter employee." Two hours later, this message was clarified to say the deactivation was "done by a Twitter customer support employee... on the employee's last day." This, in turn, led a bunch of folks on Twitter to start gleefully praising this employee (whose name is not yet known, but likely will be soon). Because it's Twitter, and Twitter can get giddy over stuff like this, there were lots of jokes and people calling this employee a hero and whatnot. (Update: A new report says that it wasn't even a full-time employee, but a contractor).

I take a very different view on this. Earlier this year, Cathy Gellis wrote a post here explaining why it would be a bad idea to kill Trump's Twitter account. You can read that post for details, but the larger point is that under no circumstances would such a move be viewed as anything other than a political statement. Twitter more or less admitted this a few weeks back when it made a public statement saying that it considers "newsworthiness" as a factor in determining whether a tweet violates its terms. And, by definition, the President's tweets are newsworthy.

The larger question, honestly, is how the hell a customer service rep, especially one who wasn't even a full-time employee, but a contractor -- on his or her last day -- had the power to simply delete the President's Twitter account. You can see how things got to this point: I'm sure in the early days, just about anyone could delete someone's account on the platform. Over time, I assume that the power was limited more and more to customer service reps -- but they were still granted the power to do so if it was necessary. But it's fairly incredible that there aren't at least some controls on this -- requiring a second person's permission? Locking certain key Twitter accounts? -- that would make what this employee did impossible.

And, of course, it's raising lots of other questions. Did this customer service rep have the ability to tweet as Trump? Considering how quickly the world reacts to Trump tweets, that could create serious havoc. I'm sure we'll be hearing plenty more on this soon, and Twitter will eventually share some sort of post-mortem on new processes and controls that have been put in place, but the fact that this even happened in the first place is not a cause for celebration, but one for concern about how Twitter's controls and processes work. Permalink | Comments | Email This Story
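As a thought experiment only -- Twitter's internal admin tooling is not public, and every name below is invented for illustration -- here is a minimal sketch of the kind of control the post is asking about: flag a short list of high-profile accounts as protected and refuse to execute a deactivation until a second, different employee signs off.

    # Hypothetical sketch of a two-person rule for destructive admin actions.
    # The protected list and employee names are invented for illustration.
    PROTECTED_ACCOUNTS = {"@realDonaldTrump", "@POTUS"}

    class PendingDeactivation:
        def __init__(self, account, requested_by):
            self.account = account
            self.requested_by = requested_by
            self.approved_by = None

        def approve(self, approver):
            # The second set of eyes must belong to someone else.
            if approver == self.requested_by:
                raise PermissionError("second approver must be a different employee")
            self.approved_by = approver

        def execute(self):
            if self.account in PROTECTED_ACCOUNTS and self.approved_by is None:
                raise PermissionError("protected account: two-person approval required")
            print(f"{self.account} deactivated")

    request = PendingDeactivation("@realDonaldTrump", requested_by="departing_contractor")
    try:
        request.execute()               # blocked: no second approval yet
    except PermissionError as err:
        print(err)
    request.approve("shift_supervisor") # a different employee signs off
    request.execute()                   # only now does the action go through

The point of the sketch is simply that a single departing contractor acting alone could not complete the action; whether Twitter adopts anything like this is, of course, unknown.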

posted 17 days ago on techdirt
Build your own drone or race car with the Force Flyers DIY Building Block Fly 'n Drive Drone for only $33. The 6-axis gyro, 360º stunt flips, and auto-stabilization make flying fun for any level of flyer. Its crash-resistant ABS plastic lets you get back on the road or in the air faster after a crash, and the drone features a flight time of 10-12 minutes on one charge. The kit is compatible with major building blocks so you can let your imagination run wild building different vehicles. It's great for STEM development and fun for all ages.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team. Permalink | Comments | Email This Story

posted 17 days ago on techdirt
It appears that Senator Rob Portman has decided to push forward with SESTA -- the Stop Enabling Sex Traffickers Act -- a bill with problems we've discussed in great detail. Despite previous suggestions that the bill would not move forward until there were important fixes in place, it's now been announced that a committee vote will happen next week. It's possible that the bill will be amended prior to that vote, but as of right now, that's not clear. In support of this renewed push, Portman has published an opinion piece at Wired that no fact checker should have allowed. It is full of completely faulty statements, and fairly incredible ones at that. It's kind of scary that it appears that Portman may be looking to undermine some fundamental principles of how the internet works based on a bunch of false statements. Even the title is just wrong:

How Federal Law Protects Online Sex Traffickers

It doesn't. Federal law is clear that law enforcement can go after sex traffickers. There is nothing in SESTA about going after sex traffickers. SESTA is entirely about going after internet platforms because someone may have used them in the process of "facilitating" sex trafficking.

It is a stain on our national character that sex trafficking is increasing in this country, in this century, and experts say it is happening because of the internet and the ruthless efficiency of online sex trafficking.

So much to unpack in this one sentence. As discussed earlier, the supposed epidemic of sex trafficking is grossly exaggerated. That is not to say it doesn't happen -- because it quite clearly does, and when it does happen, it's a serious problem. But Portman specifically has massively exaggerated the scope of the problem, and when you do that it's easier to support ridiculously overbroad "solutions" that would actually turn the small problem into a much bigger problem.

Sex trafficking has moved from the street corner to the smartphone, and online sex trafficking has predominately occurred through one website: Backpage.com.

And, as explained multiple times, if Backpage has, in fact, broken the law, there are already laws to deal with it. Backpage itself has already shut down its adult section (which only became big after politicians pressured Craigslist to do the same -- suggesting that chasing traffickers from platform to platform won't do much to stop trafficking). Just a few years ago Congress passed the SAVE Act, specifically targeting Backpage. It has not been used. Instead of passing another law, perhaps Portman should be asking why it hasn't been used? Similarly, CDA 230 has never covered federal crimes. The DOJ has always been able to target Backpage if it was violating the law (and it's been reported that the DOJ already has a grand jury investigation going into Backpage's actions). So why do we need a new law?

Headlines tell the tragic stories: In March 2013, police reported that a Miami pimp forced a teen to tattoo his name on her eyelids. In June 2017 in Chicago, feds charged a man for prostituting a 16-year-old girl before her murder. That same month, three people were accused of pimping a pregnant teen for sex.

All of these stories are horrible. But all of them involve criminals who were caught, with some of the evidence coming from Backpage. So, it seems like a bigger question may be: why aren't the police working harder to scan Backpage for evidence of these crimes and stopping them earlier?
In the past, we've noted that law enforcement has successfully used these sites as tools to track down pimps and traffickers. It seems so odd to focus all of the attention on websites rather than the actual traffickers. It's almost as if Portman is trying to take the blame away from the traffickers themselves.

These heinous crimes, and countless others, involve Backpage, and yet the website has repeatedly evaded justice for its role in child sex trafficking.

Because Backpage wasn't doing the trafficking. And again, if they were, law enforcement is already able to go after the platform.

Despite these facts, courts have consistently ruled that a federal law called the Communications Decency Act protects Backpage from liability for its role in sex trafficking. This 21-year-old law was designed to ensure websites aren’t held liable for crimes others commit using their website. The legislation has an important purpose, but now, because of broad legal interpretations, it is used as a shield by websites that facilitate the sale of women and children for sex.

This is both wrong and a sleight of hand. First, no court has said that Backpage is protected from its role in sex trafficking. If Backpage is actively involved in the sex trafficking itself, it does not qualify for CDA 230 protections. And, again, even without that, there is no immunity in CDA 230 for federal crimes (and, again, the DOJ has a grand jury going on this). The fact that Portman is so proactively misrepresenting nearly everything should worry people. Similarly, sites are not using CDA 230 "as a shield," but to make it clear that the focus should be on actual traffickers rather than on the tools they use. Traffickers use cars as well. Will Portman's next bill be holding Ford and GM responsible for any trafficking that uses cars?

The Communications Decency Act should not protect sex traffickers who prey on the most innocent and vulnerable among us.

IT DOES NOT. It never has. Nothing in CDA 230 protects traffickers. Traffickers are violating the law and law enforcement has every right to go after them. Hell, the three examples that Portman presented above all involve traffickers arrested by law enforcement. That seems to contradict his own point.

I do not believe those in Congress who supported this bill in 1996 ever thought that 21 years later, their vote would allow websites to knowingly traffic women and children over the internet with immunity.

Again, if the sites themselves are involved in the trafficking, then CDA 230 already doesn't cover them.

However, courts and attorneys general have made it clear that their hands are tied. In the most recent example, in August, a Sacramento judge threw out pimping charges against Backpage because of the liability protections afforded by this 1996 law, and he invited Congress to fix this injustice.

No. Their hands are tied in prosecuting Backpage without evidence of Backpage itself breaking the law. That's different. It doesn't tie their hands in prosecuting actual traffickers. And it doesn't stop them from prosecuting Backpage for evidence of actual crimes. Notice how Portman conveniently leaves out that while the judge in Sacramento threw out the pimping charges (because Backpage isn't doing the pimping), he let the case move forward on money laundering claims. In other words, Backpage is still in court, despite Portman implying that the entire case was dismissed.
This injustice is why I, along with more than two dozen of my colleagues from both sides of the aisle, introduced the Stop Enabling Sex Traffickers Act. The bill would do two things. First, it would allow sex trafficking victims to get the justice they deserve by removing the law’s unintended liability protections for websites that knowingly facilitate online sex trafficking. Second, it would allow state and local law enforcement to prosecute websites that violate federal sex trafficking laws. Portman is misleading in his description of his own bill. It does not only apply to those who "knowingly facilitate online sex trafficking." The "knowledge" standard in the bill is extraordinarily broad, covering "knowing actions" that are then used to facilitate sex trafficking. The distinction may be subtle, but it's huge. It means that a platform just needs to know about what its service can do, not the outcome. Wikipedia "knows" that people can add links to its online encyclopedia. It doesn't "know" when someone advertises sex trafficking via such a link. But under the current standard in the bill, that doesn't matter. The language about knowledge of how a service works, rather than the illegal activities, is a real problem with this bill. The bill will achieve these ends without threatening the years of progress we have made in creating a free and open internet. He says this despite the fact that nearly every internet company and expert says he's wrong. And plenty of sex trafficking experts as well. Just this week, a sex trafficking expert who helped write the State Department's own report on sex trafficking has said that this would create huge harms for the internet and for victims of sex trafficking. The standard for liability in our bill is a high bar to meet. This is simply incorrect. Multiple tech and legal experts have explained this over and over again. Simply saying there's a high bar does not make it true. The plain language of the bill shows that the bar is extraordinarily low. Some in the tech community incorrectly claim that this bill will expose innocent websites to frivolous lawsuits. But my Senate colleagues and I carefully crafted this legislation to remove immunity only for websites that can be proven to have intentionally facilitated online sex trafficking. Again, he can repeat this false claim as much as he wants, and it still doesn't make it true. The language in the bill is clear. If he wants it to only target those who have "intentionally facilitated online sex trafficking" he needs to change the language. Daphne Keller, at Stanford, just released a paper this week on ways to fix SESTA, and someone should send a copy to Portman. There are already exemptions in the Communications Decency Act’s liability protections for intellectual property violations that exist without undermining the fundamental intentions of the law. It is unreasonable to suggest the result of a narrowly tailored exemption against knowing sex traffickers would be any different. This is the most frustrating line in the entire piece. If Portman had the slightest bit of understanding about how the DMCA's notice-and-takedown provisions are routinely abused to censor the internet, there's no way he'd claim that it hasn't "undermined" anything. The fear here is that SESTA creates a kind of DMCA notice-and-takedown on steroids, because it adds possible criminal penalties. 
From the description here, it almost appears as if Portman doesn't even know that DMCA safe harbors exist for copyright, or that the lack of CDA 230 coverage for trademarks created a massive influx of court cases until eventually the courts effectively said that there was a DMCA-like safe harbor over trademarks as well. In short, Portman seems to be either making an argument out of pure ignorance, or intentionally misrepresenting what's happening on the intellectual property side of the fence. We have a moral responsibility to protect the most vulnerable among us and combat this injustice. Every day we wait is too late for countless vulnerable women and children. And yet, absolutely nothing in SESTA actually protects those victims. They will still be trafficked. The bill only targets internet companies providing platforms that traffickers use. They will keep using the internet to traffic, even if badly targeted lawsuits take those companies down. Indeed, it's likely that they'll move to platforms that make it more difficult for law enforcement to figure out what's going on. SESTA will make the problem worse, not better, and will create tremendous collateral damage in the meantime. Permalink | Comments | Email This Story

posted 17 days ago on techdirt
The boring old utility pole has long been at the heart of this country's broadband dysfunction. As it stands now, competing ISPs looking to deploy fiber need to contact each individual ISP -- and wait for them to finalize layers of paperwork and move their own gear -- before the competitor can attach fiber to the pole. Needless to say, ISPs have often abused this bureaucracy to stall competitors' arrival to market. So over the last few years Google Fiber has convinced several cities to pass "one touch make ready" utility pole reform rules that dramatically streamline this process. Under these reforms, one licensed, insured contractor (often the same company ISPs already use) is allowed to move any ISPs' gear -- provided they inform the ISP ahead of time and pay for any potential damages. The regulatory change can dramatically speed up fiber deployment, saving numerous months in project delays. That's why Google Fiber convinced cities like Nashville and Louisville to pass these one touch rules a few years ago. But Nashville and Louisville were subsequently sued by Comcast, Charter and AT&T. The ISPs' lawyers threw out every legal argument they could, including claims that the cities had exceeded their legal authority, that the reforms would dramatically increase service outages, and even that the reforms violated their first amendment rights. Of course the ISPs' real problem is that such reform speeds up the arrival of a concept regional duopolies loathe: actual, genuine competition. In this case, AT&T's gambit didn't work all that well. Back in August, a Judge killed off AT&T's lawsuit against Louisville, stating the city was well within its legal authority to manage the city's own rights of way (even though AT&T owns 40% of the poles in the city). AT&T appears to have gotten the message, as the telco told news outlets there this week they wouldn't be appealing the ruling: "AT&T will not appeal a federal judge’s ruling upholding a local law Louisville Metro passed last year to make it easier for new Internet providers like Google Fiber to access utility poles in the city. AT&T spokesman Joe Burgan confirmed the company decided not to appeal U.S. District Judge David Hale’s August 16 ruling upholding the so-called “One Touch Make Ready” ordinance. The lawsuit still had its intended effect in delaying Google Fiber in Louisville while AT&T worked to lock existing customers there into long-term contracts. Google Fiber meanwhile has been forced to pivot from fiber to wireless/fiber hybrid deployments in part to get around these lawsuits. But the company also managed to use techniques like microtrenching (which involves using machines that bury fiber just a few inches below the road's surface) instead of having to rely on access to utility poles. It's worth noting that a similar Charter lawsuit against Louisville, and AT&T and Comcast lawsuits against Nashville are still pending. Instead of offering better, faster, cheaper service, these companies' first instinct is almost always to either file nuisance lawsuits, or to quite literally buy state laws that make life harder on would-be competitors. And while you'll often see incumbent broadband duopolies and their policy cronies crying incessantly about "burdensome regulation" while pushing for blind deregulation, the reality is these companies adore regulation -- just as long as it hurts the other guy and slows any attempt to bring competition to bear on a broken market. Permalink | Comments | Email This Story

posted 17 days ago on techdirt
Earlier this year, Canada's top court upheld a ridiculous, truly troubling ruling involving a company called Equustek Solutions. Equustek managed to get three consecutive courts to agree they had jurisdiction to force Google to block supposedly-infringing websites worldwide. It was a rare show of audacity from the usually ultra-polite country. According to the court's reasoning, the only way to prevent continued "irreparable harm" to the plaintiff was to order Google to prevent anyone, anywhere in the world from accessing the site. That the court had no jurisdiction beyond the Canadian borders was treated as irrelevant. Google responded to this insane ruling by filing a lawsuit in its own state, asking a judge to find the Canadian court's overreach unenforceable in the United States. It cited both Section 230 of the CDA and the First Amendment in support of its arguments. This could have provided for some very interesting courtroom arguments. But, alas, it appears Equustek has no interest in presenting its case anywhere it doesn't have the homefield advantage. Joe Mullin of Ars Technica has more details: It looks like Google is going to win that case, but not as a result of any high-minded legal arguments. Its opponent simply failed to show up. In a motion (PDF) filed Tuesday, Google said that Equustek CEO Robert Angus faxed Google's lawyers a letter "stating that Defendants would not be defending this action." Equustek hasn't hired a US lawyer or shown up to any court proceeding, so Google will move for a default judgment. The company will then ask for a permanent injunction, preventing the Canadian order from being enforced in the US. Given the authorities cited by Google in its lawsuit (Sec. 230, First Amendment), it's likely to obtain this permanent injunction. It likely would have obtained it anyway, even if Equustek hadn't chosen to opt out of the litigation. Equustek knows this, which is why it's not willing to spend any of its money fighting a losing battle. Hopefully, the court will have a few things to say about the Canadian court's overreach when it hands this (admittedly easy) win to Google. It's all well and good to use home courts to grant you injunctions based on local law. It's absolutely appalling when a court decides it can demand compliance far outside of its jurisdiction. [Addendum: shortly after this post had gone to bed (but fortunately before I had) the court's ruling arrived. As expected, the court [PDF] finds in favor of Google, pointing to its Section 230 immunity. First, there is no question that Google is a “provider” of an “interactive computer service.” See 47 U.S.C. § 230(f)(2) (“The term ‘interactive computer service’ means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server.”); O’Kroley v. Fastcase, Inc., 831 F.3d 352, 355 (6th Cir. 2016) (“Google is an interactive computer service, an entity that provides ‘access by multiple users to a computer server.’ ”); Parker v. Google, Inc., 422 F. Supp. 2d 492, 501 (E.D. Pa. 2006) (“[T]here is no doubt that Google qualifies as an “interactive computer service.”); Gonzalez v. Google, Inc., No. 16-cv-03282-DM, 2017 WL 4773366, at *9 (N.D. Cal. Oct. 23, 2017) (finding that Google is a provider of an interactive computer service). Second, Datalink—not Google—“provides” the information at issue. Google crawls third-party websites and adds them to its index. 
When a user queries Google’s search engine, Google responds with links to relevant websites and short snippets of their contents. Id. Google’s search engine helps users discover and access content on third-party websites, but it does not “provide” that content within the meaning of Section 230... Third, the Canadian order would hold Google liable as the “publisher or speaker” of the information on Datalink’s websites. The Supreme Court of Canada ordered Google to “de-index the Datalink websites” from its global search results because, in the Court’s view, Google is “the determinative player in allowing the harm to occur” to Equustek... The Canadian order treats Google as a publisher because the order would impose liability for failing to remove third-party content from its search results. Google meets the requirements for Section 230 immunity. As such, the Court finds that Google is likely to prevail on the merits of its Section 230 argument.

Likewise, the court finds Google would be harmed by the Canadian court's decision.

Google is harmed because the Canadian order restricts activity that Section 230 protects. In addition, the balance of equities favors Google because the injunction would deprive it of the benefits of U.S. federal law. [...] An injunction would also serve the public interest. Congress recognized that free speech on the internet would be severely restricted if websites were to face tort liability for hosting user-generated content. It responded by enacting Section 230, which grants broad immunity to online intermediaries.

The short opinion closes out with a few choice words for the overreaching Canadian court.

The Canadian order would eliminate Section 230 immunity for service providers that link to third-party websites. By forcing intermediaries to remove links to third-party material, the Canadian order undermines the policy goals of Section 230 and threatens free speech on the global internet.

The First Amendment question goes unexplored because Section 230 immunity already provides Google with all it needs to secure an injunction. But the coda on the decision makes it clear the First Amendment question wouldn't go the Canadian court's way.] Permalink | Comments | Email This Story

posted 18 days ago on techdirt
The last time we talked about Germany's Strafgesetzbuch law, specifically section 86a that prohibits the display of Nazi symbols, iconography, or historical figures with few exceptions, was when Ubisoft accidentally sent the country versions of a South Park video game chock full of swastikas. I feel much the same today about the law as I did then: I get why the law was created, but it's probably time for it to be retired. While the law does make room for Nazi symbols to be displayed for the purposes of art and education, too often those exceptions are not actually adhered to in real-world examples, while those that might be able to fit their work within those exceptions don't bother trying, too chilled by the law that limits their speech. Couple that with the simple fact that German citizens who really want to see Nazi symbols don't have to work particularly hard to circumvent the law, and the whole matter starts to look somewhat silly. And it produces silly results. For instance, the latest game in the Wolfenstein series got around the law with what appears to be the minimum amount of effort possible.

The German Strafgesetzbuch section 86a outlaws the use of Nazi symbols as part of the denazification of the country post World War II. This law covers not only symbols like the swastika, but gestures like the Nazi salute. It doesn't explicitly prohibit depictions of Adolf Hitler, but nevertheless, Hitler's appearance in Wolfenstein 2: The New Colossus has been censored: they took his mustache off.

Other than barely changing the Nazi symbols in the game and removing Hitler's initials from what looks to be a monogrammed smoking jacket, that's pretty much it. Compliance with the law resulted in the removal of a 'stache. Meanwhile, anyone playing the game with its World War 2 themes will know exactly who they are seeing: Hitler. When a law, well-meaning or not, requires its citizens to be criminally stupid for it to be of benefit, it should be obvious that the law is broken. And it would take someone without a functioning brain to play this scene in this specific game and not realize that Hitler was on the screen. That makes the law useless at anything other than forcing us to notice how much Hitler could have looked just like our own Uncle Larry and causing us to have to deal with that reality. Again, I understand why the law was created. Even so, it's time to sunset it. Permalink | Comments | Email This Story

posted 18 days ago on techdirt
The Sixth Circuit Court of Appeals has let some more stash house sting convictions stand. But not without considerable discussion of the government's tactics. And not without one judge appending a long rebuke to her reluctant concurrence. Once again, the ATF has managed to secure multiple convictions predicated on drugs that never existed. The sting, helmed by veteran ATF agent Richard Zayas, involved a made-up drug stash house "containing" at least enough drugs to trigger 10-year mandatory minimum sentences for the defendants. Zayas' sting operations always include fictitious armed stash house guards; otherwise the ATF's involvement would be unnecessary. The end result is multiple convictions. But other than a few seized weapons, nothing contributing to public safety was achieved. No actual drug dealer was targeted, nor was the sting linked with any larger ATF/DEA/FBI operation aimed at curbing inner city drug trade. Nonetheless, the Sixth Circuit Appeals Court upholds everything, rejecting multiple due process challenges from the defendants. The entire opinion [PDF] should be read just to understand the nearly-insurmountable barriers defendants face when challenging questionable government behavior -- both during the sting and during the trial. Judge Jane Stranch's concurrence clearly communicates her displeasure with ATF sting operations in general, even if it's tempered by her inability to move the dial in the appellants' favor.

Because these stings are wholly inventions of law enforcement agents, they can and do include powerful inducements to participate in one big “hit,” a hit that is conveniently large enough to qualify for mandatory minimum sentences. Obtaining the outsized reward is also made to look easy—the agent is a disgruntled insider who knows when and how to stage these “rip-and-runs” and offers to provide all needed assistance, from manpower to transportation. The unseemly nature of the Government’s activity is emphasized by its failure to achieve its declared goals of jailing dangerous criminals and making our streets safer. Evidence showing that these hurry-up set-ups achieve the stated goals was not proffered and the facts here demonstrate why: no known dangerous individuals or criminal enterprises were researched or targeted and no pre-existing drug rings or conspiracies were broken up. In fact, this sting trapped Flowers, a gainfully employed young man with no criminal record.

This sting was like others helmed by Agent Zayas: it targeted impoverished inner city minorities. As the judge notes, the fact that ATF stings are disproportionately resulting in the jailing of minorities has not gone unnoticed. It's not just dissertations or investigations by journalists exposing this fact. The ATF is currently facing a lawsuit in Illinois over the selective targeting shown in sting operations. Stranch goes on to note multiple courts have found the ATF's actions troubling. But, so far, they've been unable to do much to stem the flow of stash house sting cases into the nation's courts. They've also been unable -- with rare, rare exceptions -- to provide any sort of relief for defendants caught up in the government's fictitious drug robbery plans.

Despite increasing awareness of the problems and inequities inherent in fictitious stash house stings, at issue here is whether an appropriate legal path exists for a defendant to successfully challenge the stings.
A majority of circuits have recognized the outrageous government defense, but impose such a high burden on defendants that the defense rarely results in dismissal of charges. [...] [I]t seems we remain without an established vehicle in the law to define a dividing line between law enforcement practices that are honorable and those that are not. In the interim, these questionable schemes continue to use significant government resources and to adversely impact the poor, minorities, and those attempting to re-integrate into society. And they apparently do so with no increase in public safety and no deterrence of or adverse effect on real stash houses. These costly and concerning sting operations do not accord with the principles of our criminal justice system and I hope they will be discontinued.

The ATF continues to spend considerable amounts of money doing little to stop the flow of contraband. It would rather chalk up easy arrests and convictions while doing almost nothing to contribute to public safety. Taxpayers are already paying the ATF to engage in literal charades. They're also on the hook for hundreds of thousands of dollars in incarceration costs per sting victim thanks to the ATF's insistence on pretending there are mandatory minimum-triggering amounts of nonexistent drugs in every fake stash house it convinces someone to rob. This is nasty, brutish work by the government. But it works too well to expect the ATF to voluntarily end this program. It produces too many convictions to be considered a waste of time by the ATF, even as it does nothing at all to stop the trafficking of drugs and guns. Permalink | Comments | Email This Story

posted 18 days ago on techdirt
A Canadian court has ruled information about law enforcement's not-all-that-secret cell tower spoofers can stay secret. An ongoing attempted murder trial has implicated the use of Stingray devices. Prosecutors have refused to turn over information about the devices to the defendants -- something that at first provoked some consternation from the presiding judge. (via Slashdot)

Court of Queen's Bench Justice Glen Poelman initially agreed with defence lawyers Kelsey Sitar and Clayton Rice and granted them the right to question the CPS officer involved in using the MDI regarding its make, model, features and the circumstances that may or may not affect its use.

Unfortunately, prosecutors were able to sway the judge's opinion during an in camera briefing. The government invoked part of the Canada Evidence Act, granting it an apparent disclosure exemption on the theory that handing over make and model information would be "contrary to the public interest." Poelman has ruled the police investigative techniques are privileged, and he prohibited the release of the make, model and software of the MDI as well as "any further information which would have the effect of disclosing the technique by which MDI obtains cellphone identifier information." This may end the line of discovery as it relates to law enforcement's IMSI catchers, but it doesn't necessarily mean the prosecution will be able to move forward. The defense plans to challenge the lawfulness of the prosecution itself. Withholding evidence possibly crucial to the defense doesn't make for a fair trial and it appears the defense will argue charges should be dropped if information isn't going to be produced. It's not like there isn't any precedent to work with. Earlier this year, the government chose to let 35 accused Mafia members go free rather than discuss Stingray use in court. Clayton Rice, who is representing one of the accused in this case, has graciously sent over a copy of the court's ruling [PDF] on the issue. (This ruling was under a publication ban until mid-morning Tuesday.) Rice points out this is only an interim ruling and doesn't necessarily represent the final word on the subject. The court has granted the government the (possibly temporary) right to withhold certain information about its cell tower spoofers, which includes its make and model. The order is heavily redacted, which is one of the reasons it's only now being released despite having been decided back in August. What can be sussed out from the redacted discussion is that the Calgary Police do not possess an actual Stingray -- the sort made by Harris Corp. That much is made clear in the ruling. The method used for tracking phones is also withheld, even though the technique used by the CPS has apparently been discussed publicly before. But you won't be able to find that information in the court's decision.

[I]t could be argued that all elements of CPS's MDI investigative technique are publicly known. However, the Crown argues that it is not known that the CPS's MDI uses the [redacted] method, and the fact that information about the [redacted] procedure may be publicly accessible is not the same (especially in the internet age) as a police service verifying its accuracy or confirming publicly that this is the procedure they use. It is not necessarily well-known. The only public information about which Sgt. Campbell is aware that discussed the [redacted] technique is the [very lengthy redaction to end of paragraph.]
For the time being, however, the Calgary Police's cellphone interception hardware will remain a mystery. The question now is whether that desire for secrecy will cost the Crown its prosecution. Permalink | Comments | Email This Story

posted 18 days ago on techdirt
FCC boss Ajit Pai has been busy ignoring the public while he kills popular net neutrality rules. But he's also been working hard to weaken broadband deployment standards to obfuscate a lack of broadband competition, to gut programs that provide broadband to the poor, killing previous FCC efforts to improve cable box competition, to protect prison telco monopolies from oversight, and to make it easier for business broadband monopolies to rip off smaller competitors. All while proclaiming to be a stalwart defender of the little guy and a champion for bridging the digital divide. But Pai has also been taking heat for his pursuit of another pet project: gutting media consolidation and ownership rules solely for the benefit of Sinclair Broadcasting, which is seeking approval for its $3.9 billion bid for Tribune. In the last few months, Pai has, as promised, been "taking a weed whacker" to rules intended to protect local reporting, media competition, and opinion diversity. That has included killing an 80 year rule intended to protect local competitors and journalism from unchecked monopoly control of a market, and taking an axe to some protections but bringing back others solely to Sinclair's benefit: "On Tuesday, the FCC eliminated a requirement for broadcasters to keep a local studio. A day later, Pai called for easing ownership restrictions, potentially taking pressure off Sinclair’s $3.9 billion deal for Tribune Media Co.’s TV stations. Earlier, he had restored an obsolete rule, making the deal possible. On Thursday, the agency moved toward blessing a new broadcasting standard that may enrich Sinclair as it offers viewers sharper pictures." As he prepares to axe yet more media consolidation protections over the coming months, Pai has trotted out the growing power of Google and Facebook as partial justification for eliminating rules he declares no longer necessary: "The marketplace today is nothing like it was in 1975. Newspapers are shutting down. Many radio and TV stations are struggling, especially in smaller and rural markets. Online competition for the collection and distribution of news is even greater than it ever was. And just two Internet companies [Google and Facebook] claimed 100 percent of recent online advertising growth. Indeed, their digital ad revenue alone this year will be greater than the market cap of the entire broadcasting industry. And yet the FCC's rules still presume that the market is defined entirely by pulp and rabbit ears." Obviously the argument that "Google and Facebook are big" and therefore media consolidation rules are unnecessary doesn't hold a whole lot of water. And while it's true that many newspapers and local news outlets are "struggling," that's more a failure of adaptation than a justification for gutting media consolidation restrictions that still aid smaller, regional news outlets. Unsurprisingly, fellow FCC Commissioners like Jessica Rosenworcel have called for an investigation into Pai's giant, sloppy kiss to Sinclair: "It has reached a point where all of our media policy decisions seem to be custom-built for this one company," Jessica Rosenworcel, a Democratic FCC member, said Wednesday at a congressional hearing. "It’s something that merits investigation.” This mindless obsession with mergers and consolidation (with little thought as to the impact on markets or competition) has been a hallmark of the Trump administration. 
But opposition to this growth-for-growth's-sake approach has been increasingly bipartisan in nature, with many smaller conservative outlets worried they'll be unable to compete with giants like Comcast NBC Universal, Sinclair/Tribune, and soon AT&T Time Warner. Smaller organizations like the American Cable Association (ACA), which applauded and supported Pai's rise to power, now seem surprised as his policies focus almost exclusively on aiding the biggest and wealthiest companies:

"ACA urges the Federal Communications Commission to deny the Sinclair-Tribune transaction because it would violate existing FCC rules while at the same time failing to meet the obligation to demonstrate it would serve the public interest. Even if the transaction were not per se unlawful, it would create a broadcasting behemoth with unprecedented control over both the national and local television markets," ACA President and CEO Matthew M. Polka said.

Whether it's gutting net neutrality solely for the benefit of a few giant ISPs, or gutting media consolidation rules exclusively to aid one giant media empire, Pai's legacy at the FCC will be one of brutal myopia, obfuscated by tall tales about his relentless dedication to the little guys he seems blatantly intent on ignoring. Permalink | Comments | Email This Story

posted 18 days ago on techdirt
While it wasn't always called "the cloud," people have been talking about and predicting the future of remote computing for the past few decades (and, sure, I know that in the early days of mainframes and terminals, that's how things worked, but I'm talking about the modern internet era). And some argue that we've now finally reached the true age of the cloud. After all, tons of people can survive with most of their documents really stored in the cloud. Indeed, many people have little use for much storage on their own computers (and, sure, I know some of you will get snooty and talk about how crazy that is, but the simple fact is that many people are not like you and don't need much in the way of local storage).

But, as I've said before, and will say again, I think by letting companies like Google and Amazon control "the cloud" we've actually missed out on the real possible benefit of the cloud. The version that I had always pictured separated out the storage layer from the service layer. I've made this point in the past concerning online cloud music services (which are now pretty obsolete due to streaming services), where I'd prefer the ability to store all of my (legal) MP3s in one spot, and then point a music playing service at those files. Instead, every cloud music service required you to upload local tracks to servers somewhere, and you'd have to do it all over again if you switched. This is obvious lock-in for those services, but it's a pain for end users, and it diminishes the possibilities for more innovative services. The same is true in other areas as well.

And I'm reminded of this due to a bug in Google Docs that hit some people earlier this week. When people went to access their docs, they were told they were locked out due to a "terms of service violation." This turned out not to be true (Google just fucked up in a way that "incorrectly flagged a small percentage of Google Docs as abusive, which caused those documents to be automatically blocked"). And, while this was a stupid mistake (one that legitimately freaked out a bunch of people who rely on Google Docs), it again highlights the problem.

Google Docs is a fantastic and useful service. But it would be a hell of a lot better if the service layer and the storage layer were separated. In the bad old days when I used Microsoft Word, I wouldn't have wanted that app shutting down because it thought someone wrote an "abusive" letter. Why should that even be an option in Google Docs? And why should Google run both the service and the storage part? Why can't I store the doc somewhere else, and just point Google Docs to that storage, such that I can still get the same service, but Google has no right to deny me access to the documents I make?

Again, I understand the business logic behind this (lock-in!) and even some of the legal logic behind this (for example, in my music example, I'm sure that some would argue that a service playing from an accessible data store of (even legal) MP3s would infringe). But, out of all that, it feels like we've really missed out on the true promise of the cloud, in which we separate out the services from the data, and allow more and varied services to compete, without also claiming ownership and having the ability to block access to the data. This SNAFU with Google Docs only serves as another reminder of how problematic this can be.
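To make the "separate the storage layer from the service layer" idea a little more concrete, here's a minimal sketch of what that separation could look like in code. It's purely illustrative -- the class and function names are hypothetical, and nothing here is a real Google or Amazon API -- but it shows the shape of the argument: the editing service operates on whatever storage the user points it at, so being cut off from the service never means being cut off from the documents.

from typing import Protocol


class StorageBackend(Protocol):
    """Anything that can hold the user's bytes: local disk, a home NAS, a rented bucket."""
    def read(self, doc_id: str) -> bytes: ...
    def write(self, doc_id: str, data: bytes) -> None: ...


class LocalDiskStorage:
    """User-controlled storage; the service never owns these files."""
    def __init__(self, root: str):
        self.root = root

    def read(self, doc_id: str) -> bytes:
        with open(f"{self.root}/{doc_id}", "rb") as f:
            return f.read()

    def write(self, doc_id: str, data: bytes) -> None:
        with open(f"{self.root}/{doc_id}", "wb") as f:
            f.write(data)


class DocumentService:
    """The 'service layer': editing features only, pointed at whatever storage the user chose."""
    def __init__(self, storage: StorageBackend):
        self.storage = storage

    def append_line(self, doc_id: str, line: str) -> None:
        # Read the current document from the user's storage, modify it, write it back.
        text = self.storage.read(doc_id).decode("utf-8")
        self.storage.write(doc_id, (text + line + "\n").encode("utf-8"))


# The user picks the storage; swapping services later doesn't strand the documents.
service = DocumentService(LocalDiskStorage("/home/user/docs"))

Under that kind of model, a service-side "abuse" flag could still disable the editor, but it couldn't block the underlying files, and switching to a competing service would be as simple as pointing it at the same storage.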
Permalink | Comments | Email This Story

posted 18 days ago on techdirt
You spend a lot of your day at your desk while your devices just continue to lose power. Keep them powered up so you won't have to think about it when you leave work with this ZeroLemon 75W Desktop Charger. It includes a USB Type-C port, 2 standard USB-A ports, and a PD/QC3.0 compatible port, and its built-in intelligent chip allows simultaneous multi-device charging at high speed. It is on sale for $37. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team. Permalink | Comments | Email This Story

posted 18 days ago on techdirt
As you may have heard, this week there were three Congressional hearings in two days, allowing various Congressional committees to drag out officials from Facebook, Twitter and Google and slap them around for the fact that some bad things happen on those platforms. The general sentiment appeared to be sputtering anger that social media companies haven't magically figured out how to "stop bad stuff" on these platforms. Perhaps the strongest statement came from Senator Dianne Feinstein during one of the hearings, in which she stated:

I must say, I don't think you get it. You're general counsels, you defend your company. What we're talking about is a cataclysmic change. What we're talking about is the beginning of cyber warfare. What we're talking about is a major foreign power with the sophistication and ability to involve themselves in a presidential election and sow conflict and discontent all over this country. We are not going to go away, gentlemen. And this is a very big deal. I went home last night with profound disappointment. I asked specific questions, I got vague answers. And that just won't do. You have a huge problem on your hands. And the US is going to be the first of the countries to bring it to your attention, and other countries are going to follow, I'm sure. Because you bear this responsibility. You created these platforms, and now they're being misused. And you have to be the ones who do something about it... or we will.

We've gone over this before, because it's one of those things that everyone seems to think is easy to solve, when the reality is that most attempts to solve the "problem" of "bad stuff" result in a bigger problem. Yes, it's probably true that these companies could be more forthcoming and transparent, but part of the problem is not that these companies just want to hold their cards close, it's that (1) there are no easy answers and (2) almost any "solution" is fraught with even more problems that will almost certainly make the problem worse. At the very same time that tons of people are complaining about these platforms failing to stop loosely defined "bad speech," you have another group that is complaining about bad/bogus takedowns/censorship. How do you balance those two things? If you think there's an easy way, you're wrong.

On top of that, the idea that "bad" content is obvious is ludicrous on multiple levels. First, the scale of this issue is massive. And that impacts things in multiple ways. It means it's impossible to carefully review every piece of content, meaning that a ton of "bad" stuff will always slip through and people will complain that the platform is failing or not taking the issue seriously. At the same time, a bunch of errors in the other direction will be made (taking down stuff that should be left up). It's the classic issue of Type I and Type II errors -- and at the scale these platforms operate, you will inevitably have so many of both as to make the entire effort appear completely ineffective.

And, to make the situation even more ridiculous, even if there were some regulatory regime that could accurately manage the issues discussed above, it would almost certainly be cost-prohibitive for all but the largest of players. And thus, the end result of this regulatory "attack" on Facebook, Google, and Twitter may be to lock in those three companies as the dominant players and lock out any innovative startups.
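To put some rough numbers on the Type I/Type II point above, here's a quick back-of-the-envelope sketch. Every figure in it is an assumption invented purely for illustration -- the post volume, the share of content that's actually "bad," and the error rates are not from the hearings or from any platform -- the point is the arithmetic, not the specific numbers:

# Illustrative only: assume a platform sees 500 million posts a day and a
# moderation system that is 99% accurate in each direction.
posts_per_day = 500_000_000
share_actually_bad = 0.001          # assume 0.1% of posts are truly "bad"
false_negative_rate = 0.01          # 1% of bad posts slip through (Type II)
false_positive_rate = 0.01          # 1% of fine posts get taken down (Type I)

bad_posts = posts_per_day * share_actually_bad
fine_posts = posts_per_day - bad_posts

missed_bad = bad_posts * false_negative_rate        # bad stuff that stays up
wrongly_removed = fine_posts * false_positive_rate  # legitimate speech taken down

print(f"Bad posts that slip through every day: {missed_bad:,.0f}")
print(f"Legitimate posts taken down every day: {wrongly_removed:,.0f}")
# With these made-up numbers: roughly 5,000 "failures to act" and roughly
# 5 million takedowns of legitimate speech -- every single day.

Even with an absurdly optimistic 99% accuracy in both directions, the hypothetical platform leaves thousands of genuinely bad posts up every day while taking down millions of legitimate ones. That's why both camps of critics will always have plenty of examples to point to, and why "just stop the bad stuff" isn't an actionable instruction.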
And then there's this: While these Senators were attacking these three companies, they were relying heavily on Twitter and Facebook to talk up and promote the fact that they were in a hearing bashing Twitter and Facebook. While the article linked here suggests that this isn't ironic because it just demonstrates the power imbalance, there's a more subtle issue at play. These platforms became so useful in large part because they were free to innovate and to experiment and to allow for lots of different uses. And, sure, some of those uses are ones that many of us find distasteful, offensive, or even potentially dangerous. But before we leap in with wild abandon, with Congress mandating solutions that will be policed by these very same platforms, shouldn't we be at least a little concerned that the end result will create a lot more problems than it's supposedly solving? And, yet, so far, there has been little indication of what exactly Congress (or anyone with the anti-tech pitchforks) has in mind other than "take responsibility" or "stop the bad stuff." And that's not even remotely productive, and has a high likelihood of being harmful. Permalink | Comments | Email This Story

posted 18 days ago on techdirt
We've been discussing how Sprint's plan to merge with T-Mobile would be notably awful for the wireless industry. Not only do Wall Street analysts predict it would kill anywhere from 10,000 to 30,000 jobs (potentially more people than Sprint even currently employs), but it would reduce the number of major competitors in the space from four to three -- dramatically reducing the industry's incentive to compete on price and service. The resulting competitive lull could derail many of the good things a resurgent T-Mobile has encouraged in the sector (like the death of long-term contracts and the return of unlimited data plans).

Given the giant industry rubber stamp that is Trump FCC boss Ajit Pai, many analysts believed the administration would approve the deal anyway. Sprint and its Japanese owner Softbank have spent the better part of the year buttering up the Trump administration in preparation for regulatory approval, going so far as to custom-craft some bullshit job-creation synergies Donald could easily use to justify approval of the arguably awful deal. Unfortunately for Sprint lobbyists, they may never get the chance. This week, reports out of Japan indicated that Softbank Chair Masayoshi Son had walked away from the negotiating table after a dispute over who should have the most control over the freshly-merged company:

"SoftBank Group plans to break off negotiations toward a merger between subsidiary Sprint and T-Mobile US amid a failure to come to terms on ownership of the combined entity, dashing the Japanese technology giant's hopes of reshaping the American wireless business. SoftBank is expected to approach T-Mobile owner Deutsche Telekom as early as Tuesday to propose ending the talks. They had reached a broad agreement to integrate T-Mobile and Sprint -- the third- and fourth-largest carriers in the U.S. -- and were ironing out such details as the ownership ratio."

T-Mobile and its owner Deutsche Telekom obviously want to retain control of the brand identity of T-Mobile in the wake of the deal, since the company has been immensely successful thanks to actually listening to customers (mostly). Sprint, in contrast, has stumbled through the last several years loaded with debt, and hasn't been able to craft a brand identity (or a working network) that truly resonates with consumers. It's not particularly surprising that T-Mobile and cheeky CEO John Legere want more control over the merged company than Sprint and Softbank may be willing to give.

The problem for Sprint at this point is that the only thing holding up the company's stock price for most of this year has been merger rumor and speculation. As such, some Wall Street analysts think Sprint might need to go private if it's to survive the fallout from the deal's collapse, while other analysts say failure to finalize the deal could erode up to $50 billion in theoretical value between the two companies:

"(I)f these management teams fail to get this deal across the goal line, they have failed to do their job," New Street wrote. "They will be walking away from close to $50 billion in value. Regardless of what either side thinks their asset is worth on its own, adding $50 billion to that starting value would be a big enough increase in value that they ought to have found a way to get the deal done."

A scuttled deal would be good news for T-Mobile's Legere, who might find synchronizing his consumer-friendly brand with the competition-killing deal a tall order.
That said, it remains entirely possible that Sprint's leaked decision to walk away from the negotiating table is a bluff. Since Sprint needs the deal much more than T-Mobile does, it's more than possible the two sides will still find a way to get the deal done. Should that occur, we can look forward to a winter filled with entirely bogus "synergy" promises as investors wait to see just how big of a mindless rubber stamp the Trump administration truly is. Permalink | Comments | Email This Story

posted 18 days ago on techdirt
Not really sure why we're putting the Department of Homeland Security in charge of securing anything.

Between fiscal years 2014 and 2016, Department of Homeland Security personnel lost a total of 2,142 highly sensitive assets — 228 firearms; 1,889 badges; and 25 secure immigration stamps.

That's from the latest Inspector General's report [PDF] on DHS components' ability to secure items that might wreak havoc -- ranging from inappropriate access to multiple deaths -- if left improperly secured. This includes current presidential faves CBP and ICE -- both DHS components. Believe it or not, that bad news is actually the good news:

Although this represents a slight improvement from our last audit, more than half of the lost items we reviewed (65 of 115) revealed that component personnel did not follow policy or used poor judgment when safeguarding these assets.

The IG should probably not expect more year-to-year improvements, no matter how slight.

In these cases, components did not always hold personnel accountable nor did they receive remedial training for failing to safeguard these sensitive assets.

And I'm not kidding about loose components causing death. The report points to a 2015 robbery where something ICE didn't secure led to exactly this.

Even with new controls designed to strengthen the security of sensitive assets, lost or stolen Federal firearms continue to be used to commit serious crimes. For instance, a media article reported a September 2015 robbery in which an attacker killed a man with an Immigration and Customs Enforcement (ICE) firearm that was stolen from an unattended vehicle. The ICE agent failed to properly secure the weapon inside the vehicle in a high crime area.

ICE seems particularly careless with firearms.

Two off-duty ICE officers left their firearms unsecured and unattended in backpacks while on a beach in Puerto Rico. When the officers returned, the bags were gone. An ICE officer left his firearm, badge, and credential unsecured in his hotel room while on vacation. As he slept, his overnight guest stole his belongings.

But take heart, those of you concerned about the border being overrun by non-US citizens. The CBP is just as terrible.

A Customs and Border Protection (CBP) officer left his backpack containing his wallet and government badge in an unlocked public gym locker. When he returned, his belongings were gone. A CBP officer left his firearm in a bag at a friend's house. When he returned 2 days later, the gun could not be located. A CBP officer left his firearm and other law enforcement equipment in an unlocked vehicle overnight. The following day he realized his firearm and two magazines were no longer in the vehicle.

Actually, it appears CBP has been asking ICE to hold its beer, in terms of responsible weapons handling.

At a CBP regional armory, 208 firearms could not be physically located. The property custodian researched the situation, and approximately 2 weeks later provided documentation of the actual physical locations for each firearm, which included various lockers and storage vaults across CBP's field offices. [...] At a CBP office, the property custodian was unable to immediately locate firearms from the inventory. After searching the facility, the property custodian discovered the firearms in a random file cabinet, stored haphazardly in boxes.

Yes, one CBP office was utilizing a gun filing system (using an actual file cabinet) that resembled just one of several horrifying finds in an episode of Hoarders.
Worse than the DHS's gun handling was its badge handling. Nearly 2,000 badges were unaccounted for, which means any number of people could be roaming around impersonating government agents. A couple of flashes of a 100% legitimate badge (in terms of origin, not current carrier) can help the holder obtain access to off-limits areas and/or personal identifying information on citizens/non-citizens, and otherwise abuse a borrowed position of power. On top of the incalculable costs, there are the tax dollars involved in replacing them at $40-75 a pop.

Things won't improve if the DHS doesn't start taking this more seriously. More than half of the cases reviewed by the IG ended with, at most, a letter of reprimand. In 22 of the 65 cases reviewed, no disciplinary action at all was taken. And these disciplinary actions will need to be preceded by rigid, standardized policies on handling of sensitive items. In addition, the DHS will actually need to establish a credible tracking system. The items tallied by the IG may only be the tip of the iceberg. As it notes in its report, it was unable to obtain enough documentation on nearly one-quarter of the items reviewed to determine whether the losses were due to careless handling.

These are the people securing our borders and playing an integral part in our national security directives. And yet they're leaving guns in unattended backpacks and leaving badges behind in restaurants and amusement parks. And the DHS doesn't consider this to be enough of a problem to handle with meaningful punishments or consistent policies and reporting. Permalink | Comments | Email This Story

posted 19 days ago on techdirt
There are really two themes when it comes to DRM, software supposedly created to stop video game piracy. The first and most notorious theme is what an utter failure DRM has been in accomplishing this core mission. Even once-vaunted DRM platforms like Denuvo have been reduced to code-bloat within the games they're meant to protect. And that's the DRM on the effective end of the spectrum, relatively speaking.

But the other theme, one that is arguably far more important and impactful, is how absolutely great DRM software tends to be at annoying customers and preventing them from enjoying the games they legitimately purchased. This theme presents itself in multiple forms, from people being flat-out unable to use the software they purchased at all, to performance hits due to the DRM software slowing down customers' computers, to opening up grand new security holes through which malicious actors happily dive into the lives of those very same customers. The track record for DRM, in other words, is almost laughably bad. That AAA publishers haven't acknowledged this reality and still use various forms of DRM is an absurdity. But what Ubisoft did in reacting to the demise of Denuvo, essentially to double up on DRM, is backfiring in predictably frustrating ways.

Ubisoft, being Ubisoft, included Denuvo's DRM for Assassin's Creed Origins. But with all the bad news surrounding Denuvo, the company knew the game would be cracked in hours or days if it relied on Denuvo alone. So, instead of simply removing the customer-annoying DRM, Ubisoft decided to add another layer of DRM on top of it, in the form of VMProtect.

According to Voksi, whose ‘Revolt’ team cracked Wolfenstein II: The New Colossus before its commercial release last week, it’s none of these. The entire problem is directly connected to desperate anti-piracy measures. As widely reported (1,2), the infamous Denuvo anti-piracy technology has been taking a beating lately. Cracking groups are dismantling it in a matter of days, sometimes just hours, making the protection almost pointless. For Assassin’s Creed Origins, however, Ubisoft decided to double up, Voksi says. “Basically, Ubisoft have implemented VMProtect on top of Denuvo, tanking the game’s performance by 30-40%, demanding that people have a more expensive CPU to play the game properly, only because of the DRM. It’s anti-consumer and a disgusting move,” he told TorrentFreak.

If the VMProtect name sounds familiar, that's because it was the company that actually accused Denuvo of using its software in its product without properly licensing it. And if layering DRMs on top of one another and expecting it not to have a negative effect on legit customers sounds like the product of insanity, that's because it is. Basically, unless you're running a high-end processor, the game is likely to be unplayable.

“What is the normal CPU usage for this game?” a user asked on Steam forums. “I randomly get between 60% to 90% and I’m wondering if this is too high or not.”

The individual reported running an i7 processor, which is no slouch. However, for those running a CPU with less oomph, matters are even worse. Another gamer, running an i5, reported a 100% load on all four cores of his processor, even when lower graphics settings were selected in an effort to free up resources.

“It really doesn’t seem to matter what kind of GPU you are using,” another complained. “The performance issues most people here are complaining about are tied to CPU getting maxed out 100 percent at all times.
This results in FPS [frames per second] drops and stutter. As far as I know there is no workaround.” Well, gentle Steam user, there is a workaround, but it mostly involves buying games from a company that is more interested in providing a great gaming experience to its actual customers than attempting to stamp out game piracy when doing so has proven the most futile task in the industry. If even lowering the graphics settings doesn't keep the game from stuttering noticeably, it won't be long before the refund requests start pouring in. Especially when this decision to layer DRMs like sweatshirts causes customer machines to overheat. The situation is reportedly so bad that some users are getting the dreaded BSOD (blue screen of death) due to their machines overheating after just an hour or two’s play. It remains unclear whether these crashes are indeed due to the VMProtect/Denuvo combination but the perception is that these anti-piracy measures are at the root of users’ CPU utilization problems. Ubisoft is always going to Ubisoft, I suppose. Permalink | Comments | Email This Story

posted 19 days ago on techdirt
A third Appeals Court has ruled on the tactics the FBI used to track down users of a dark web child porn site. And the third one to rule -- the First Circuit Appeals Court -- continues the government's shutout of suppression orders at the appellate level. In the two previous cases to reach this level (Tenth and Eighth), the judges found the FBI's Network Investigative Technique to be a search under the Fourth Amendment. This wasn't much of an issue because the FBI had a warrant. The real issue was the warrant's reach: it was issued in Virginia but the NIT found a home in computers all over the US, not to mention the rest of the world.

The lower courts' decisions ordering suppression of evidence for the use of an invalid warrant have all been rejected by US appeals courts. Good faith has been granted to the agent securing the warrant, thus preventing suppression of evidence. In one case, the court even conjectured the deterrent effect of evidence suppression made little sense now that the FBI has statutory permission to ignore jurisdictional limitations when seeking warrants.

The First Circuit Appeals Court's decision [PDF] is no different from those preceding it. The previously granted suppression is reversed and the FBI is awarded good faith for its warrant application, which clearly told the Virginia magistrate judge the agency intended to violate the warrant's jurisdictional limits. This decision, however, limits its discussion to the good faith exception, and the judges refuse to draw possibly precedential conclusions about the magistrate judge's legal authority to grant a "search anywhere" warrant.

The "search anywhere" part of the warrant the lower court found invalid is all academic at this point. Rule 41 jurisdictional limits have been lifted. But that did not happen until after this warrant was procured and deployed. Like the Eighth Circuit before it, the First Circuit decides this after-the-fact rule change somewhat negates the deterrent effect of suppression. The First Circuit says good faith prevails, as the warrant was more or less explicit in its intentions and still managed to be signed by a judge. In fact, the court praises the FBI for applying for a warrant it likely knew violated pre-rule change jurisdiction limitations.

We are unpersuaded by Levin's argument that because, at least according to him, the government was not sure whether the NIT warrant could validly issue under Rule 41, there is government conduct here to deter. Faced with the novel question of whether an NIT warrant can issue -- for which there was no precedent on point -- the government turned to the courts for guidance. The government presented the magistrate judge with a request for a warrant, containing a detailed affidavit from an experienced officer, describing in detail its investigation, including how the NIT works, which places were to be searched, and which information was to be seized. We see no benefit in deterring such conduct -- if anything, such conduct should be encouraged, because it leaves it to the courts to resolve novel legal issues.

I guess the court would prefer to tangle with legal issues it hasn't seen before. This would be one of them -- at least in terms of thousands of searches performed with a single warrant from a seized child porn server located in Virginia. The legal issues may be novel but the end result is more of the same: good faith exception granted and the admission of evidence questionably obtained. Permalink | Comments | Email This Story

posted 19 days ago on techdirt
Over the summer, we discussed how laughably bad Russia's efforts at blocking so-called "piracy sites" have been. In the course of four years of attempting to stamp out copyright infringement in the country, the Russian government managed to block 4,000 sites it intended to target as piracy sites, and 41,000 sites it had not intended to target that were caught up as collateral damage. Those are the kind of numbers that would make a cluster bomb blush. Even so, you might have imagined that this heavy-handed iron-fist routine must surely have had some effect in reducing the rate of piracy in Russia. The short answer to that is: nooooooope. Instead, over the course of the past few years, the market for pirated video content in Russia has doubled.

According to new research published by Group-IB and reported by Izvestia, Internet pirates have been adapting to their new reality, finding new and stable ways of doing business while growing their turnover. In fact, according to the ‘Economics of Pirate Sites Report 2016’, they’ve been so successful that the market for Internet pirate video more than doubled in value during 2016, reaching a peak of 3.3 billion rubles ($57.2m) versus just 1.5 billion rubles ($26m) in 2015. Overall Internet piracy in 2016 was valued at a billion rubles more ($74.5m), Group-IB notes.

So what's going on here? Well, the Russian government is learning the invaluable lesson that the internet is built to route around this kind of censorship. That old adage aside, what's actually occurring is the start of an arms race between website operators in Russia and the government agencies dedicated to stopping them. And the government is losing. Badly.

Overall, it’s estimated that the average pirate video site makes around $156,000 per year via advertising, subscriptions, or via voluntary donations. They’re creative with their money channels too. According to Maxim Ryabyko, Director General of Association for the Protection of Copyright on the Internet (AZAPO), sites use middle-men for dealing with both advertisers and payment processors, which enables operators to remain anonymous.

This sort of shell game being employed by possibly truly pirate-y websites is the same one played by all kinds of websites looking to survive attempts at censorship. Where we might decry a site doing this to offer up video content that infringes copyright, we would applaud its use if the site were advocating for free speech, fair and open elections, etc. In other words, it's the censorship that is bad, not necessarily the actions of those routing around it. And, more to the point, it doesn't work. In the face of these damning numbers, the Russian government has two options: give up or censor even harder. The latter will, naturally, result in even more of the collateral damage that has already been inflicted. Still, it seems the more likely scenario. Permalink | Comments | Email This Story

posted 19 days ago on techdirt
Earlier this year, the Trump administration and GOP handed a giant gift to the nation's telecom duopolies when they dismantled FCC broadband privacy protections. While ISPs whined incessantly about the rules, the protections were relatively modest -- simply requiring that large ISPs be transparent about what personal data is being collected and sold, who it's being sold to, and that working opt-out tools be provided to consumers. The FCC's rules were only created after Verizon was caught modifying packets to covertly track users around the internet and AT&T tried to make consumer privacy a luxury add-on.

But in the wake of the GOP's myopic dismantling of the rules, more than 30 states began considering their own disparate privacy protections for consumers. The EFF threw its support behind one such bill in California, arguing that it could provide a good template for other states to follow in order to gain some uniformity. But Google, Comcast, AT&T and Verizon collectively lobbied to scuttle that law last month, with leaked documents showing how they lied to California voters and lawmakers in claiming that the rules would have emboldened extremists, boosted annoying popups, and somehow harmed consumers.

On the heels of that victory, Verizon is now lobbying the FCC to ban states from trying to protect consumer privacy. FCC Commissioner Mike O'Rielly had already hinted at this path in recent speeches to industry-backed think tanks, but what this effort would look like isn't yet clear. In a recent letter and white paper submitted to the FCC (pdf), Verizon urges the FCC to use its authority to block these state laws, and warns of the perils of states trying to actually protect consumers from unchecked broadband duopolists:

"Allowing every State and locality to chart its own course for regulating broadband is a recipe for disaster. It would impose localized and likely inconsistent burdens on an inherently interstate service, would drive up costs, and would frustrate federal efforts to encourage investment and deployment by restoring the free market that long characterized Internet access service."

There are a few things Verizon's ignoring. One, states wouldn't be rushing to create a patchwork quilt of consumer protections if Verizon lobbyists hadn't successfully convinced former Verizon lawyer turned FCC boss Ajit Pai to kill existing, modest federal protections. This is entirely a problem of ISP lobbyists' making. It's also worth noting that ISPs like Verizon have spent decades writing and buying protectionist, competition-killing state laws in order to protect their regional broadband mono/duopolies. When folks have pointed out that maybe giant ISPs shouldn't be writing shitty state law, ISPs (and the lawmakers paid to love them) have cried about the trampling of "states' rights." Yet when those same states actually try to do something good for the end user, trampling those same rights appears to be a non-issue. That's an obvious double standard by any measure.

Further on in the white paper, Verizon makes it clear that it's also worried that states will rush to protect net neutrality after the FCC votes to kill existing net neutrality rules later this year:

"States and localities have given strong indications that they are prepared to take a similar approach to net neutrality laws if they are dissatisfied with the result of the Restoring Internet Freedom proceeding.
Notably, the New York State Attorney General claims that “the role of the states in protecting consumers and competition on the Internet remains critical and necessary.” Yes, the absolute unbridled horror of states protecting consumers and small businesses after the federal government has become a glorified rubber stamp for broadband duopolies! Again -- if Verizon doesn't want states creating broadband-focused consumer protections, it should stop trying to dismantle every federal consumer protection in existence. That includes the extremely popular (and again, relatively modest by international standards) net neutrality protections currently on the books. Verizon believes it should be completely free of anything even vaguely resembling oversight as it shifts its focus, rather clumsily, toward being a Millennial advertising engine. But while Verizon has argued for years it can self-regulate without adequate oversight, the lack of competition in most Verizon markets highlights how that's simply not practical. From the company's covert tracking of users using "zombie cookies," to its ongoing efforts to sell your personal data without informing you or letting you opt out, Verizon continues to make it perfectly clear that privacy and transparency are a distant afterthought. That leaves us with two choices: improving market competition to increase organic pressure until Verizon behaves, or leaning on some fairly basic regulatory oversight to ensure consumer privacy is protected by some basic rules of the road. Verizon would obviously prefer it if the country did neither, and so far we seem more than happy to accommodate. Permalink | Comments | Email This Story

posted 19 days ago on techdirt
Do the police in Fairfax County, Virginia really not know about the 1st Amendment? It certainly appears that way after watching the video of them violently arresting a reporter named Mike Stark, who was trying to cover the gubernatorial campaign of Ed Gillespie. Now, because some people will want to mention this, I'll note that the following is (a) true and (b) makes no difference at all to this story: Stark works for a highly partisan website that is strongly opposed to Gillespie. But the points here would be identical if it were a reporter at the other end of the partisan divide following the opposing candidate. The positions of the reporter (or the candidate) are meaningless to the basic question of why the fuck was Mike Stark thrown to the ground, piled on by cops and arrested.

And "fuck" seems to be the key word here. The background is that Stark appeared to be filming Gillespie's bus, and a police officer told him to "get out of the road" (from the video it's a little unclear, but it really looks like Stark was standing in what appears to be a driveway, not a road). Either way, he backs up a bit and argues with the cop, most of which is impossible to hear. But you can make out him saying "I'm a fucking reporter doing my job." At that point, another cop says "If you curse again, you're going to go to jail." To which, Stark responds in the most responsible manner possible: "Fuck this." At that point, the one officer points to him and says "Go to jail" and the other moves him up against a fence.

The officers appear to have some trouble getting Stark's hands behind his back, though this does not appear to be due to Stark resisting, just police officers who don't appear to be very good at their job. So they just swipe his legs out from under him, throw him to the ground (hitting his head on the pavement) and then a bunch of other officers run over and they all just pile on Stark, who repeatedly says he'll give them his hands if they just get off him so he can move his arm out.

Eventually, the cop cites Fairfax County Ordinance 511, which does (amazingly) say that "If any person profanely curse or swear or be drunk in public he shall be deemed guilty of a Class 4 misdemeanor." So that law is on the books -- but it's bullshit. There is no way that such a law is even remotely compatible with the First Amendment. And, of course, when Stark was actually charged, Section 511 was nowhere to be found. Instead, the cops charged him with the favorites of police who have arrested someone for no cause: "disorderly conduct" and "resisting arrest."

This is... bad. It's a clear First Amendment violation and an attack on a reporter. Others who have been arrested (sometimes on similar charges) for filming in public have sometimes been successful in filing civil rights lawsuits against the cops.

On a separate, but related note, it appears the cops did not realize they were being filmed until toward the end of the video, when one of the cops walks over and angrily says to the person holding the camera: "I'd appreciate it if you didn't film us. Really would. Ok? This job's hard enough. Honestly? It's hard enough." Yeah, must be real hard when you get to body slam a reporter for daring to say the word "fuck" and then have to answer to public scrutiny for your thuggish violation of his rights. Real tough job. Permalink | Comments | Email This Story

posted 19 days ago on techdirt
Pay what you want for the Adobe CC mastery bundle and you get 2 courses introducing you to Adobe After Effects and Adobe Bridge. You’ll learn the basics of After Effects CC, from importing assets, to animating effects, to ultimately exporting a final project, and you’ll go through the very basics of Adobe Bridge, like how to find certain files, filtering and previewing images. If you beat the average price, you'll unlock 7 more courses about InDesign, Dreamweaver, Illustrator and Photoshop. You'll be a master of the Adobe CC Suite when you've finished all 9 courses. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team. Permalink | Comments | Email This Story

posted 19 days ago on techdirt
A reaction to the (non-physical) "explosion of social media in our society" has prompted a Florida legislator to make a questionable law even worse. Florida already has a law on the books making it a second-degree felony to threaten to kill or harm someone via electronic communications. That's apparently not good enough for state Rep. Stan McClain (whose "explosion" statement is quoted above). He has introduced an amendment to the law that would eliminate the language requiring targeted communications.

McClain's bill would outlaw "written threats to kill or do bodily injury to another person that are publicly posted online, even if not specifically sent to or received by the person who is the subject of the threat..."

You can see immediately where the problem lies: this bill has the potential to criminalize protected speech, not to mention cause harm to people who express themselves terribly and in an unfocused manner. State Rep. Julio Gonzalez argued the bill would criminalize stupidity -- a tempting prospect to be sure, but all but guaranteed to result in First Amendment violations. McClain wants to fix what he views as a loophole in the state's existing online threats law.

[A] recent state appellate decision highlighted the problem of prosecuting such cases when threats are posted on social media, as opposed to being sent by email, and are not necessarily aimed at one person. "A juvenile's conviction … was overturned, although the juvenile had posted multiple threats of school violence on Twitter, because the threats were not directly sent to or received by any of the threatened students or school officials," a staff analysis explained.

This isn't a bug. It's a feature. Online speech should be difficult to prosecute, just like offline speech is. There's a fine line to tread when prosecuting apparent threats. Rewording the state law this way will only lead to state-ordained punishment of protected speech.

McClain is still trying to fine-tune his bill, but it has already been passed out of committee and is on the road to becoming viable legislation. The language treats any threat that can be viewed by anyone else as a criminal act, even if viewers aren't targeted. McClain says the targeted criminal activity is the posting of threatening messages. He claims prosecutors won't stack charges based on how many times the untargeted threat was viewed. That's nice of him to say before the fact, but the reality is Florida residents won't know how the law will be enforced until someone starts enforcing it. Permalink | Comments | Email This Story
