posted 2 days ago on re/code
Reed Hastings is one of many high-powered hosts of the Buttigieg fundraising event scheduled on Monday. | Ernesto S. Ruscio/Getty Images/Netflix The families of Eric Schmidt, Reed Hastings, and Sergey Brin are hosting a fundraiser for Buttigieg on Monday. Representatives from Silicon Valley’s wealthiest families are raising money for Pete Buttigieg’s presidential campaign in a show of force not yet seen from the industry in this primary campaign, Recode has learned. A host list circulated to prospective donors for an event on Monday morning in Palo Alto, California, features individuals with family ties to some of the most prominent people in Big Tech. Netflix CEO and co-founder Reed Hastings is listed as a co-host of the event, as are Nicole Shanahan, the wife of Google co-founder Sergey Brin; Wendy Schmidt, the wife of former Google CEO Eric Schmidt; and Michelle Sandberg, the sister of Facebook COO Sheryl Sandberg, sources say. The inclusion of these people on the list says nothing definitive about who Sergey Brin, Sheryl Sandberg, or Eric Schmidt themselves will support in the 2020 race, of course. But the event’s host list is a reminder of Buttigieg’s ties to Silicon Valley, which are increasingly becoming front and center in the presidential campaign thanks to Elizabeth Warren, who is raising questions about Buttigieg’s relationships with major contributors. At a time when Big Tech and elite donors are on the ropes in Democratic politics, Buttigieg is embracing both more than his rivals. How voters respond will be an indication of how much they care about candidates’ connections to Silicon Valley titans. Buttigieg has been making inroads with tech donors throughout 2019. During his last trip to Silicon Valley in September, the Democratic candidate quietly had a private sit-down at Emerson Collective with billionaire Laurene Powell Jobs, Recode has learned. Powell Jobs has met with other candidates as well. A Buttigieg spokesperson said in a statement: “We are proud to have the support of more than 700,000 grassroots donors across the country who are helping power this campaign. The only thing people are promised at an event with Pete is that he will use that money to beat Donald Trump.” The Palo Alto event is one of four Buttigieg fundraisers being hosted in the Bay Area beginning on Sunday evening. In Napa Valley, Buttigieg will be hosted by Katie Hall, an advisor to ultra-high-net-worth clients, for “An Evening in the Vineyards with Mayor Pete,” according to an invitation seen by Recode. In Woodside, Buttigieg will be hosted by Justin Rosenstein, the co-founder of Asana. This is notable because Rosenstein’s co-founder, Dustin Moskovitz, is one of the Democratic Party’s biggest mega-donors, though he is not expected to weigh in on the presidential primary. To close out the trip in San Francisco, Buttigieg will be hosted by art gallery owner Jeffrey Fraenkel and Sabrina Buell, who belongs to a family famous for its political fundraising. In a sign of Buttigieg’s appeal, that event — which has only one asking price, the maximum individual contribution limit of $2,800 — is sold out, a rarity in presidential fundraising. But it is the Palo Alto event that is likely to turn heads. The Brin, Schmidt, Hastings, and Sandberg families have a combined net worth of about $80 billion, according to estimates. These co-hosts are promised an “intimate meeting with Mayor Pete” at the coffee fundraiser in exchange for donating $2,800 apiece to his campaign, according to the invitation.
Hastings had been scheduled to host Buttigieg on a prior trip that was later canceled. The Netflix CEO is more politically active than other tech billionaires and has sunk millions of dollars into advocating for charter schools in California. Shanahan, Brin’s wife, is being watched closely in the world of Silicon Valley philanthropy and politics. The couple got married earlier this year, and Shanahan has embarked on an ambitious effort to research reproductive aging by dedicating $100 million to a new group called the Bia Echo Foundation. Schmidt’s wife, Wendy, has long been focused on the Schmidt Family Foundation, the philanthropic group backed by Eric Schmidt’s money from Google that supports efforts like ocean conservation and international leadership development. Sandberg’s sister, Michelle, and her husband, Marc Bodnick, have backed Buttigieg for a while and have become top fundraisers for his campaign. Their family members are being more circumspect. Brin and Sheryl Sandberg have not made any endorsements in the 2020 race, though Sandberg has long been one of Silicon Valley’s most prolific donors. Eric Schmidt, a power broker in Democratic politics during the Obama years, has also raised money for Joe Biden. But at a moment when Buttigieg is coming under criticism from Warren, any ties to Big Tech will only accentuate her argument about his relationship with Silicon Valley. Warren, for her part, is holding no official fundraising events and has said she is returning contributions over $200 from Big Tech executives — though her rebuffs don’t seem to diminish her appeal for many of tech’s elite figures. “When a candidate brags about how beholden he feels to a group of wealthy investors, our democracy is in serious trouble,” Warren said of Buttigieg in a major speech on Thursday. Warren has also succeeded in pressuring Buttigieg into committing to release the names of his fundraisers and to open up his events to the media. This quartet of fundraisers in Silicon Valley will be some of the first events to allow media access as part of an effort to show that what happens behind closed doors isn’t as mysterious as you’d think.

Read More...
posted 2 days ago on re/code
Twitter’s logo displayed outside the New York Stock Exchange on the day of its IPO in 2013. | Andrew Burton/Getty Images Twitter’s “inferred interests” tell you — and advertisers — what you’re supposedly into. Vox Slack on a Friday is probably like a lot of office Slacks on Fridays: The chatter there can get a bit … unproductive. And one Friday a while back, it entailed a discussion of what Twitter thinks we’re into. A colleague had stumbled upon Twitter’s list of her “inferred interests” — basically, the things it believes she likes and who she is. Twitter describes her as an “affluent baby boomer” and “corporate mom” with multiple kids. (She’s a 27-year-old single woman without children.) It lists dozens and dozens of car-related interests. (She doesn’t have a car — or even a driver’s license.) She commented that though internet companies seemingly track her every move, Twitter, at least in her case, has a “hilariously misguided sense of who I am.” Her discovery, naturally, sent a lot of other people — including me — to check out what Twitter thinks they’re into. My inferred interests weren’t so off-base. Twitter knows I’m a millennial, though it thinks I make more money than I do and have somehow managed to buy a home in New York City. It knows I like The Bachelor. But it also thinks I’m into stamps and coins, which, what? As for Recode co-founder and prolific tweeter Kara Swisher, Twitter lists among her inferred interests “Maggie Haberman” and “Men’s Pants.”
Former Arsenal soccer player Ian Wright tweets about his food in Singapore on May 13, 2015. | Charles Pertwee/Getty Images for Barclays Asia Trophy
Twitter has been letting users get a look at what it thinks they’re into since 2017, when it rolled out a series of privacy updates, including some improvements to transparency. Users can see what Twitter thinks they’re interested in as well as what Twitter’s partners — i.e., advertisers — think they like. Seeing what Twitter thinks you like can be a fun activity — but it can also be an odd experience to see what the company infers about you from your online moves. The psychology around targeted advertising is complex. On the one hand, if we have to see ads, it’s probably better that they’re in line with our interests. On the other, knowing how much advertisers know can feel a bit, well, creepy. And what can be an even weirder experience is when we see an ad that doesn’t feel quite right but that isn’t unfathomably wrong, either — like a man in his 20s suddenly getting ads for hair loss products, or a woman in her 30s seeing ads to freeze her eggs. “Our brain is able to process things that are relevant to us,” Saleem Alhabash, a professor at Michigan State University and co-director of its Media and Advertising Psychology Lab, said. “But what happens when the ads are suggesting things that are not relevant but that are slightly plausible?”
How to figure out what Twitter thinks your interests are
To find out what Twitter thinks about you, go to “Settings and privacy” > “Your Twitter data” > “Interest and ads data.” There, you can see your “inferred interests from Twitter” — the interests Twitter has matched to you based on your profile and activity — and your “inferred interests from partners,” or what Twitter’s ad partners think about your hobbies, income, shopping interests, etc. That’s based on information collected apart from Twitter, both online and off. The ad partners basically build “audiences” for advertisers to help them reach customers.
The example Twitter gives on its website is that a pet food company might use an audience to find dog owners to try to sell them dog food. Twitter’s ad partners have 15 interests for me. They think I’m really into juice and ice cream, which, not so much, but they’re right on mustard and non-dairy milk. They also think I’ve got a pretty sick house. As far as my Twitter interests go, it lists 190. I should probably spend less time looking up stuff on The Bachelor. It’s important to note that you can opt out of getting shown interest-based ads. You can shut it off using your Twitter settings or go to the Digital Advertising Alliance’s consumer choice tool to opt out there as well. And you can deselect interests if they’re not for you.
It’s weird to know what Twitter thinks you like
Vox Slack chatter exemplified how thought-provoking a tool like this “inferred interests” one can be. Multiple colleagues weighed in about their own discoveries — one found that Twitter listed several of his interests as a series of Bens (namely, Shapiro and Sasse); another said she has more than a dozen boxes for Broad City. And some interests were oddly specific — multiple boxes for Michael Cohen saying President Trump used racist language, or for a Rolling Stone article about Johnny Depp. It’s not clear what advertisers would do with information that granular, but it could be important for the Twitter algorithm for surfacing tweets. We don’t really know exactly how the algorithms that try to figure out our interests work. Companies gather a ton of data about us all the time, and how they interpret and use that data isn’t entirely clear. “It’s a big mystery box,” Alhabash told me. And the endeavor, as Twitter’s “inferred interests” shows, isn’t always a fruitful one: It gets some things right, but it gets a lot of things wrong.
Employees walk past a lighted Twitter logo as they leave the company’s headquarters in San Francisco on August 13, 2019. | Glenn Chapman/AFP/Getty Images
People don’t necessarily mind ad targeting, but they do get kind of freaked out when it gets too creepy. Harvard Business School research published in 2018 found that transparency around ad targeting can be good for platforms such as Twitter, but users become more wary when they think it goes too far. Wired wrote up the research last year: The researchers say their findings mimic social truths in the real world. Tracking users across websites is viewed as an inappropriate flow of information, like talking behind a friend’s back. Similarly, making inferences is often seen as unacceptable, even if you’re drawing a conclusion the other person would freely disclose. For example, you might tell a friend that you’re trying to lose weight, but find it inappropriate for him to ask if you want to shed some pounds. The same sort of rules apply to the online world, according to the study. And it’s not just when ads are right that they make us nervous; it’s also when they’re wrong, or at least when we perceive them as such. As Alhabash pointed out, being shown an irrelevant ad can sometimes be as thought-inducing as being shown one that’s relevant, especially when the targeting is as personalized as much of what we experience online. He called those ads “selectively irrelevant.” They’re ads that aren’t applicable now but could be in the future, or ads that make you wonder whether tech platforms or advertisers know something about you that you don’t. Is your hairline about to start receding? Should you talk to your doctor about freezing your eggs?
This is a tool that retailers have long used. Target in 2012, for example, raised eyebrows when the New York Times published a story about it sending pregnancy-related offers to a teenage girl before her family knew she was pregnant. But thanks to the power of the internet and tech conglomerates such as Google and Facebook, companies now have a lot more data about us than before. Facebook’s practices for gathering information about users have come under fierce scrutiny in recent years, and both Facebook and Google know more about us than we’d probably like to think. CNBC last year laid out what sort of data Facebook tracks and how it does it: By now you’ve probably gathered that Facebook uses things like your interest, age and other demographic and geographic information to help advertisers reach you. Then there’s the stuff your friends do and like — the idea being that it’s a good indicator for what you might do and like. So, if you have a friend who has liked the New Yorker’s Facebook page, you might see ads for the magazine on your Facebook feed. But that’s just the tip of the iceberg. Facebook and advertisers can also infer stuff about you based on things you share willingly. For example, Facebook categorizes users into an “ethnic affinity” based on what it thinks might be their ethnicity or ethnic influence. It might guess this through TV shows or music you’ve liked. Often, Facebook is wrong — and while it’s possible to remove it, you can’t change it. There is also no “ethnic affinity” option for whites. The fact that the platforms sometimes get the targeting data wrong probably doesn’t please advertisers. In Twitter’s case, the fact that so many of the inferred interests were wrong could explain some of its problems in monetizing its business. Business issues aside, seeing what Twitter knows you like — or thinks you like — can have some awkward implications that we still don’t completely understand. “[Researchers] are trying to understand, how does the notion of relevance make people feel? How does it make them feel about themselves, about the advertisers and the product?” Alhabash said. Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.

Read More...
posted 2 days ago on re/code
Christina Animashaun/Vox How flawed genetic testing could be used for more than screwing up your race. Three years ago, I put my faith in a 23andMe DNA test and got burned. While most of my results initially checked out — about 50 percent South Asian and what looked like a 50 percent hodgepodge of European — there was one glaring surprise. Where roughly 25 percent Italian was supposed to be, Middle Eastern stood in its place. The results shocked me. Over the years, I had made much of the Italian portion of my heritage: I had learned the language, majored in Latin in college, and lived in Rome, Italy, for my semester abroad. Still, as a rational person, I believed the science. But my grandmother, whose parents moved from Sicily to Brooklyn, where she was born and grew up speaking Italian, refused to accept the findings. Fast forward to this summer, when I got an email about new DNA relations on 23andMe and revisited my updated genetic results, only to find out that I am, in fact, about a quarter Italian (and generally southern European). But it was too late to tell my grandma. She’s dead now and I’m a liar. This sort of thing happens a lot because ancestry DNA testing — and genetic testing in general — is an inexact science that’s prone to errors throughout almost every step of the process. As my Vox colleague Brian Resnick has explained, some small amount of error is unavoidable within the technical portion of analyzing your DNA. Making the results of these tests even more unreliable is the fact that their whole ancestry component is based on self-reported surveys from people who say they belong to one ancestry or another — an inherently flawed practice. Sample sizes vary by location and by testing company, so there’s a big disparity in data quality, especially if you happen to not be white. That’s because Europeans are much more represented in DNA databases and therefore much more exact information can be gleaned about their DNA.
The writer’s grandma Jo in the hospital in 2017. | Courtesy of Rani Molla
The writer (left), her sister, and a dog. | Courtesy of Rani Molla
Of course, what would be much more troubling than getting someone’s heritage or hair color wrong is using that information to inform decisions made about that person. And as more people submit their DNA to genetic testing companies, and more law enforcement and government agencies figure out ways to use this deeply personal genetic information, it could be used against us. Making matters more concerning is that there are very few legal safeguards on what companies and governments can and can’t do with data gleaned from direct-to-consumer genetic tests. “Under existing law it would be legal to very broadly share consumer information if you disclose that that was happening in the privacy policy and terms of service with the customer,” James Hazel, a research fellow at Vanderbilt University Medical Center, who has done research on genetics test privacy policies, told Recode. And companies don’t have to stick with existing privacy policies, either. “Nearly every company reserves the right to change their privacy policies at any time.” Of course, few people read privacy policies in the first place (under 10 percent always do so, according to a new Pew Research study). And the existing privacy policies for genetic testing aren’t necessarily clear or forthcoming.
Hazel found that 39 percent of the 90 genetics testing companies he researched had “no readily accessible policy applicable to genetic data on their website.” Hazel says some of the biggest genetics testing companies, like 23andMe and Ancestry, have signed on to a list of best practices, a policy framework created by the Future of Privacy Forum, which includes both consumer and industry advocacy groups. The practices include agreements to be transparent around data collection, to take strong security measures, and to use valid legal processes when working with law enforcement. While signing a pledge with these well-intentioned ideas is comforting, the practices are ultimately vague and not legally mandated. Failing to live up to these tenets is a PR flub rather than a legal burden. He also warned that while large companies might be motivated by public opinion, consumer feedback, and media scrutiny, smaller companies tend to be overlooked and left to do what they want, under the radar. “Just like the industry is very diverse in terms of tests offered, also the information and the quality of the privacy policies are all over the map,” he told Recode.
What genetic testing is already — and could someday be — used for
Law enforcement has long used DNA testing in police investigations, but these consumer tests give authorities an exponentially bigger potential pool — more than 26 million people have taken at-home ancestry tests. These tests compromise the genetic privacy not just of people who choose to take the tests, but also of their distant relatives, who haven’t consented to anything. In one recent high-profile case, authorities were able to track down the Golden State Killer after four decades by using DNA from his third and fourth cousins, who had voluntarily uploaded their DNA test results to GEDMatch, a public site where people go to find long-lost relatives — and a resource that police rely on to help investigate crimes. This year, GEDMatch changed its settings so that users have to opt in to law enforcement searches, which has shrunk the available database from over a million to just 180,000 profiles. It’s notable that DNA testing accuracy varies a lot by application, with finding a DNA relative being a lot more reliable than determining ancestry, and loads more accurate than, say, finding the ideal diet for your DNA. Authorities can, in some cases, go directly to the DNA testing sites to access people’s genetic information. Earlier this year, BuzzFeed News reported that FamilyTreeDNA, one of the biggest direct-to-consumer testing sites, was working directly with the FBI to browse its database for matches — and relatives of matches — of people suspected of violent crimes. The report got FamilyTreeDNA kicked off the list of supporters of the aforementioned best practices. Both 23andMe and Ancestry say they don’t willingly share information with law enforcement, unless compelled by a valid legal process like a court order. A 23andMe spokesperson added, “We use all legal measures to challenge any and all requests in order to protect our customer’s privacy. To date, we have successfully challenged these requests and have not released any information to law enforcement.” Beyond policing, it’s possible DNA test results could be used against you or your relatives in other ways. The Genetic Information Nondiscrimination Act prevents health insurance companies and employers from using genetic data to deny you employment or coverage.
The intention is to prevent employers and insurance companies from denying coverage or discriminating against people based on, say, their having a cancer-correlated genetic variant. But companies with fewer than 15 people are exempt from this rule, as are life insurance, disability insurance, and long-term care insurance companies — all of which can request genetic testing as part of their application process. And in other countries without laws protecting citizens from genetic discrimination, the stakes are even higher. China is using DNA samples — as well as genetic research from a Yale geneticist — to track and oppress Uighurs, a mostly Muslim ethnic group that the country’s government has forced into “reeducation” camps.
Reagents for forensic DNA fingerprinting and relationship testing produced by Nearmedic Pharma in Obninsk, Russia, on October 28, 2018. | Anton Novoderezhkin/TASS via Getty Images
Consumer genetics testing companies also sell your data to third parties like pharmaceutical companies, making what ultimately happens with this sensitive information more difficult for consumers to track. They also make genetic data available to academic researchers in human biology who use it for legitimate studies. And companies are popping up every day, promising to use your DNA for everything from figuring out what wine or marijuana varietals your genetics predispose you to, to what skin care regimen is best for you, according to Jennifer King, director of consumer privacy at Stanford Law School’s Center for Internet and Society. “The science across all that is probably total junk,” she told Recode. Still, the most troubling potential consequences of imperfect genetic testing and a lack of regulation on how this data can be used may not have even happened yet — or we may just not yet be aware of them. An FBI agent who works on biological countermeasures, Edward You, thinks hacking genetic data could be a national cybersecurity threat that makes the US vulnerable to biological attacks. Advertising is also a natural, though troubling, future use case for your genetic data. “23andMe could decide that they want to use genetic data for ad targeting. They could potentially give a list of customers to Johnson & Johnson,” King told Recode. “It would be a change, but they could do it.” More likely, these companies could sell advertisers access to you on their websites, allowing advertisers to place ads in front of certain demographics when those users view their DNA results, without telling advertisers which individuals they’re reaching. “They could decide, ‘Hey we’re gonna follow the Google or Facebook model and allow advertisers to target customers through our platform,’” King said. 23andMe doesn’t currently allow companies to advertise to 23andMe customers, nor does it allow advertising on the 23andMe website. As to what the future holds, a spokesperson said, “We can only comment on what we’re doing today. However, before making any changes to how a customer’s data is being used or shared, we ask that customer for their explicit consent.” Without that approval, the spokesperson said, nothing will change in how a person’s info is shared.
The larger point is that giving access to our DNA data now might have bigger consequences than we realize when we first decide to spit in a tube and find out if we’re really a quarter Italian. “When you make the decision to give away your DNA data, that choice affects you and everybody related to you,” King said. “It’s not necessarily where it goes right now, but where it goes in the future.”
What’s next
At the federal level, there’s limited regulation overseeing how companies can share consumer DNA test data, but some states have put forth various bills on the matter. The Federal Trade Commission can step in, and has done so in especially egregious cases when companies run afoul of their own privacy policies. But it’s most likely that legislation will come in the form of data privacy laws more generally, Hazel said. “Rather than genetic privacy specific legislation, I think we will see data privacy legislation that has an impact on companies that offer these services,” he said. Internationally, the European Union’s General Data Protection Regulation (GDPR) explicitly classifies genetic data as a special category of personal data, meaning it has enhanced protections over regular personal data. Currently in the US, competing Republican and Democratic data privacy bills are circulating in the Senate, though either will need elusive bipartisan support to become law. It’s also unclear how these would deal with genetic privacy. “There appears to be a growing push for federal data privacy regulation given the challenges created by a non-uniform system in which various states each enact their own laws with varying requirements,” Hazel said. For now, consumers can, of course, choose not to take consumer DNA tests. Or, King suggests, they can take the tests under a fake name, review them, then ask the testing company to delete their account. Consumers can also take a long, hard look at the privacy policies they’re not reading. For those who already have taken the tests, there’s the option to delete your profile and take the results with a grain of salt. As for me, it’s too late to apologize to my grandma for believing a flawed genetics report rather than her. Open Sourced is made possible by the Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.

Read More...
posted 3 days ago on re/code
Bonobos co-founder Andy Dunn at Code Commerce 2017 in New York City. | Keith MacDonald Andy Dunn, who sold Bonobos to Walmart in 2017, is leaving the giant retailer. Andy Dunn, the founder of menswear brand Bonobos, is leaving Walmart two-plus years after selling his company to the retail giant for $310 million, according to a source. Dunn was most recently overseeing what was, at one point, supposed to be a key component of Walmart’s digital reinvention: a collection of digital-native brands like Bonobos and Eloquii that were meant to connect with a different demographic of shoppers and provide Walmart with more merchandise that customers couldn’t find on competing sites like Amazon. Walmart has, for the last few years, been trying to overhaul its digital efforts to close the gap in online sales between itself and Amazon, which is more than seven times the size of Walmart in US online retail sales. But Walmart has scaled back some plans of its e-commerce chief Marc Lore, who, at one point earlier in his tenure, suggested that he might purchase a new digital-native brand every month. Walmart has put the brakes on those acquisitions, as the company’s US e-commerce losses are projected to surpass $1 billion this year, Recode previously reported. As a result, Dunn’s purview has not grown as much as once imagined. Walmart sold off one of its other digital-native brands, Modcloth, in a fire sale earlier this year and had considered offers for Bonobos. Bonobos also laid off dozens of employees earlier this year. In an internal memo announcing Dunn’s departure, the company said: “After more than two years innovating new, incubated brands and bringing on important acquired brands, as an entrepreneur at heart, Andy Dunn has decided now is the right time to take the next steps in his career. During the last two and a half years, Andy’s contributions to the organization have been invaluable. He’s been instrumental in building out and growing Walmart’s proprietary brand portfolio. The DNA of the incubated and acquired brands is now a key part of our strategy, and provides us a brand engine we can plug directly into the enterprise.” Besides Bonobos and Eloquii, the women’s plus-sized clothing brand Walmart bought last year, Dunn oversaw the launch and growth of Allswell, an online and in-store mattress brand that Walmart unveiled in 2018. Lore said at Recode’s Code Commerce event in September that Walmart would focus more on incubating new online businesses inside of Walmart like Allswell that could have mass appeal, instead of acquiring more companies. Walmart, the world’s largest brick-and-mortar retailer, purchased Bonobos for $310 million in 2017. Under Lore’s digital leadership, Walmart was trying to build out a portfolio of digital-first fashion brands that would attract a generation of younger shoppers who increasingly rely on Amazon for more of their shopping needs. Lore joined Walmart in 2016 when it acquired his startup Jet.com for $3 billion. Recode reported this summer that Bonobos and Eloquii were still unprofitable, a fact not popular with Walmart corporate leadership, who are used to printing profits. Besides an online presence, both brands also operate physical stores. Still, the company is continuing to invest in some big e-commerce initiatives. In September, Walmart announced it was launching a $98-a-year grocery delivery service whose product selection it hopes to broaden over time to better compete with the giant selection offered by Amazon’s Prime delivery membership program. 
Earlier this year, Walmart also acquired rights to relaunch the once-trendy fashion brand Scoop NYC, but at lower price points. The brand is being sold online and in a small selection of Walmart stores.

Read More...
posted 3 days ago on re/code
Zac Freeland/Vox Demographics will determine who gets hit worst by automation. Policy will help curb the damage. The robots will someday take our jobs. But not all our jobs, and we don’t really know how many. Nor do we understand which jobs will be eliminated and which will be transitioned into what some say will be better, less tedious work. What we do know is that automation and artificial intelligence will affect Americans unevenly, according to data from McKinsey and the 2016 US Census that was analyzed by the Brookings think tank. Young people — especially those in rural areas or who are underrepresented minorities — will have a greater likelihood of having their jobs replaced by automation. Meanwhile, older, more educated white people living in big cities are more likely to maintain their coveted positions, either because their jobs are irreplaceable or because they’re needed in new jobs alongside our robot overlords. The Brookings study also warns that automation will exacerbate existing social inequalities along certain geographic and demographic lines, because it will likely eliminate many lower- and middle-skill jobs considered stepping stones to more advanced careers. These job losses will be concentrated in rural areas, particularly the swath of America between the coasts. However, at least in the case of gender, it’s the men, for once, who will be getting the short end of the stick. Jobs traditionally held by men have a higher “average automation potential” than those held by women, meaning that a greater share of those tasks could be automated with current technology, according to Brookings. That’s because the occupations men are more likely to hold tend to be more manual and more easily replaced by machines and artificial intelligence.
Automated robotic arms assemble parts of a Volkswagen Crafter van in the newly opened Volkswagen AG manufacturing plant in Wrzesnia, Poland, on December 12, 2016. | Wojtek Laski/Getty Images
Of course, the real point here is that people of all stripes face employment disruption as new technologies are able to do many of our tasks faster, more efficiently, and more precisely than mere mortals. The implications of this unemployment upheaval are far-reaching and raise many questions: How will people transition to the jobs of the future? What will those jobs be? Is it possible to mitigate the polarizing effects automation will have on our already-stratified society of haves and have-nots? A recent McKinsey report estimated that by 2030, up to one-third of work activities could be displaced by automation, meaning a large portion of the populace will have to make changes in how they work and support themselves. “This anger we see among many people across our country feeling like they’re being left behind from the American dream, this report highlights that many of these same people are in the crosshairs of the impact of automation,” said Alastair Fitzpayne, executive director of the Future of Work Initiative at the Aspen Institute. “Without policy intervention, the problems we see in our economy in terms of wage stagnation, labor participation, alarming levels of growth in low-wage jobs — those problems are likely to get worse, not better,” Fitzpayne told Recode. “Tech has a history that isn’t only negative if you look over the last 150 years.
It can improve economic growth, it can create new jobs, it can boost people’s incomes, but you have to make sure the mechanisms are in place for that growth to be inclusive.” Before we look at potential solutions, here are six charts that break down which groups are going to be affected most by the oncoming automation — and which have a better chance of surviving the robot apocalypse:
Occupation
The type of job you have affects your likelihood of being replaced by a machine. Jobs that require precision and repetition — food prep and manufacturing, for example — can be automated much more easily. Jobs that require creativity and critical thinking, like those of analysts and teachers, can’t as easily be recreated by machines. You can drill down further into which jobs fall under each job type here.
[Chart: automation potential by occupation]
Education
People’s level of education greatly affects the types of work they are eligible for, so education and occupation are closely linked. Less education will more likely land a person in a more automatable job, while more education means more job options.
[Chart: automation potential by education level]
Age
Younger people are less likely to have attained higher degrees than older people; they’re also more likely to be in entry-level jobs that don’t require as much variation or decision-making as they might have later in life. Therefore, young people are more likely to be employed in occupations that are at risk of automation.
[Chart: automation potential by age group]
Race
The robot revolution will also increase racial inequality, as underrepresented minorities are more likely to hold jobs with tasks that could be automated — like food service, office administration, and agriculture.
[Chart: automation potential by race]
Gender
Men, who have always been more likely to have better jobs and pay than women, also might be the first to have their jobs usurped. That’s because men tend to over-index in production, transportation, and construction jobs — all occupational groups that have tasks with above-average automation exposure. Women, meanwhile, are overrepresented in occupations related to human interaction, like health care and education — jobs that largely require human labor. Women are also now more likely to attain higher education degrees than men, meaning their jobs could be somewhat safer from being usurped by automation.
[Chart: automation potential by gender]
Location
Heartland states and rural areas — places that have large shares of employment in routine-intensive occupations like those found in the manufacturing, transportation, and extraction industries — contain a disproportionate share of occupations whose tasks are highly automatable. Small metro areas are also highly susceptible to job automation, though places with universities tend to be an exception. Cities — especially ones that are tech-focused and contain a highly educated populace, like New York; San Jose, California; and Chapel Hill, North Carolina — have the lowest automation potential of all. See how your county could fare on the map below — the darker purple colors represent higher potential for automation:
[Map: automation potential by county]
Note that in none of the charts above are the percentages of tasks that could be automated very small — in most cases, the Brookings study estimates, at least 20 percent of any given demographic will see changes to their tasks due to automation. Of course, none of this means the end of work for any one group, but rather a transition in the way we work that won’t be felt equally.
“The fact that some of the groups struggling most now are among the groups that may face most challenges is a sobering thought,” said Mark Muro, a senior fellow at Brookings’s Metropolitan Policy Program. In the worst-case scenario, automation will cause unemployment in the US to soar and exacerbate existing social divides. Depending on the estimate, anywhere from 3 million to 80 million people in the US could lose their jobs, so the implications could be dire. “The Mad Max thing is possible, maybe not here but the impact on developing countries could be a lot worse as there was less stability to begin with,” said Martin Ford, author of Rise of the Robots and Architects of Intelligence. “Ultimately, it depends on the choices we make, what we do, how we can adapt.” Fortunately, there are a number of potential solutions. The Brookings study and others lay out ways to mitigate job loss, and maybe even make the jobs of the future better and more attainable. The hardest part will be getting the government and private sector to agree on and pay for them. The Brookings policy recommendations include: Create a universal adjustment benefit for laid-off workers. This involves offering career counseling, education, and training in new, relevant skills, and giving displaced workers financial support while they work on getting a new job. But as we know from the first manufacturing revolution, it’s difficult if not impossible to get government and corporations on board with aiding and reeducating displaced low-skilled workers. Indeed, many cities across the Rust Belt have yet to recover from the automation of car and steel plants in the last century. Government universal adjustment programs, which vary in cost based on their size and scope, provide a template but have had their own failings. Some suggest a carbon tax could be a way to create billions of dollars in revenue for universal benefits or even universal basic income. Additionally, taxing income as opposed to labor — which could become scarcer with automation — provides other ways to fund universal benefits. Maintain a full-employment economy. A focus on creating new jobs through subsidized employment programs will help create jobs for all who want them. Being employed will cushion some of the blow associated with transitioning jobs. Progressive Democrats’ proposed Green New Deal, which would create jobs geared toward lessening the United States’ dependence on fossil fuels, could be one way of getting to full employment. Brookings also recommends a federal monetary policy that prioritizes full employment over fighting inflation — a feasible action, but one that would require a meaningful change to the Fed’s longstanding priorities. Introduce portable benefits programs. This would give workers access to traditional employment benefits like health care, regardless of whether or where they’re employed. If people are taken care of in the meantime, some of the stress of transitioning to new jobs would be lessened. These benefits also allow the possibility of part-time jobs or gig work — something that has lately become more of a necessity for many Americans. Currently, half of Americans get their health care through their jobs, and doctors and politicians have historically fought against government-run systems. The concept of portable benefits has recently been popular among freelance unions as well as among contract workers employed in gig economy jobs at companies like Uber. Pay special attention to communities that are hardest hit.
As we know from the charts above, some parts of America will have it way worse than others. But there are already a number of programs in place that provide regional protection for at-risk communities that could be expanded upon to deal with disruption from automation. The Department of Defense already does this on a smaller scale, with programs to help communities adjust after base closures or other program cancellations. Automation aid efforts would provide a variety of support, including grants and project management, as well as funding to convert facilities to new uses. Additionally, “Opportunity Zones” in the tax code — popular among the tech set — give companies tax breaks for investing in low-income areas. These investments in turn create jobs and stimulate spending in areas where it’s most needed. Increase investment in AI, automation, and related technology. This may seem counterintuitive, seeing as automation is causing many of these problems in the first place, but Brookings believes that embracing this new tech — not erecting barriers to the inevitable — will generate the economic productivity needed to increase both standards of living and jobs outside of those that will be automated. “We are not vilifying these technologies; we are calling attention to positive side effects,” Brookings’s Muro said. “These technologies will be integral in boosting American productivity, which is dragging.” None of these solutions, of course, is a silver bullet, but in conjunction, they could help mitigate some of the pain Americans will face from increased automation — if we act soon. Additionally, many of these ideas currently seem rather progressive, so they could be difficult to implement in a Republican-led government. “I’m a long-run optimist. I think we will work it out. We have to — we have no choice,” Ford told Recode, emphasizing that humanity also stands to gain huge benefits from using AI and robotics to solve our biggest problems, like climate change and disease. “The short term, though, could be tough — I worry about our ability to react in that time frame,” Ford said, especially given the current political climate. “But there comes a point when the cost of not adapting exceeds the cost of change.” Open Sourced is made possible by the Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.

Read More...
posted 3 days ago on re/code
Another day, another WeWork-related casualty. This time: Spacious, a restaurant-based coworking startup that WeWork recently acquired, announced it’s shutting down. | Spacious The famously struggling coworking giant bought Spacious in August, and now it’s closing its doors. WeWork is closing its restaurant coworking startup Spacious, after acquiring the company only around four months prior. It’s the latest sign that the troubled company is cutting costs by offloading the plethora of businesses it rapidly acquired in the past few years. Making matters more complicated, the US Securities and Exchange Commission (SEC) is investigating WeWork’s original acquisition of Spacious. The SEC’s inquiry follows a whistleblower complaint alleging that executives pushed the deal through without doing enough outside review of Spacious’s financials, a claim that WeWork has denied. According to the complaint, WeWork bought the company for $42.5 million. Spacious is just one of several startups WeWork has acquired in the past few years that are being shut down, going through layoffs, or on the market. Executives at Conductor, an SEO and content marketing startup that WeWork acquired, are buying back the company from its troubled owner. And Managed by Q, which sells technology to help companies manage workplace tasks and services, is reportedly up for sale. Meetup, the popular social network for people to get together over shared interests, is going through restructuring and layoffs and is also reportedly up for sale. “As part of WeWork’s renewed focus on its core workspace business, Spacious will close its doors on December 31, 2019. We regret any disruption that this may cause to you or your business,” reads an email sent to Spacious customers on Thursday. According to Business Insider, which first reported the shutdown, Spacious’s entire staff of 50 employees was laid off. A spokesperson for WeWork shared a statement confirming the shutdown along with the following: “To minimize any disruption, Spacious members will receive prorated refunds as well as discounts on select WeWork memberships in order to maintain access to flexible workspaces and a global community. The Spacious team will receive severance and other forms of assistance to aid in their career transitions.” Spacious, founded in 2016 by Chris Smothers and Preston Pesek, built its business renting out tables in vacant restaurant space during the daytime to people looking for coworking space. In July, the startup shut down its San Francisco locations. At the time, the company said it was in part due to regulatory hurdles. Around a month later, WeWork acquired the company. Spacious customers with recurring subscriptions or recently purchased day passes will get discounts on renting desks at WeWork in the future if they choose. Add Spacious to the growing list of collateral in WeWork’s cinematic implosion.

Read More...
posted 3 days ago on re/code
Christina Animashaun/Vox AI is being used to attract applicants and to predict a candidate’s fit for a position. But is it up to the task? With parents using artificial intelligence to scan prospective babysitters’ social media and an endless slew of articles explaining how your résumé can “beat the bots,” you might be wondering whether a robot will be offering you your next job. We’re not there yet, but recruiters are increasingly using AI to make the first round of cuts and to determine whether a job posting is even advertised to you. Often trained on data collected about previous or similar applicants, these tools can cut down on the effort recruiters need to expend in order to make a hire. Last year, 67 percent of hiring managers and recruiters surveyed by LinkedIn said AI was saving them time. But critics argue that such systems can introduce bias, lack accountability and transparency, and aren’t guaranteed to be accurate. Take, for instance, the Utah-based company HireVue, which sells a job interview video platform that can use artificial intelligence to assess candidates and, it claims, predict their likelihood to succeed in a position. The company says it uses on-staff psychologists to help develop customized assessment algorithms that reflect the ideal traits for a particular role a client (usually a company) hopes to hire for, like a sales representative or computer engineer.
Output of an artificial intelligence system from Google Vision, performing facial recognition on a photograph of a man in San Ramon, California, on November 22, 2019. | Smith Collection/Gado/Getty Images
That algorithm is then used to analyze how individual candidates answer preselected questions in a recorded video interview, grading their verbal responses and, in some cases, facial movements. HireVue claims the tool — which is used by about 100 clients, including Hilton and Unilever — is more predictive of job performance than human interviewers conducting the same structured interviews. But last month, lawyers at the Electronic Privacy Information Center (EPIC), a privacy rights nonprofit, filed a complaint with the Federal Trade Commission, pushing the agency to investigate the company for potential bias, inaccuracy, and lack of transparency. It also accused HireVue of engaging in “deceptive trade practices” because the company claims it doesn’t use facial recognition. (EPIC argues HireVue’s facial analysis qualifies as facial recognition.) The complaint follows the introduction of the Algorithmic Accountability Act in Congress earlier this year, which would grant the FTC authority to create regulations to check so-called “automated decision systems” for bias. Meanwhile, the Equal Employment Opportunity Commission (EEOC) — the federal agency that deals with employment discrimination — is reportedly now investigating at least two discrimination cases involving job decision algorithms, according to Bloomberg Law.
AI can pop up throughout the recruitment and hiring process
Recruiters can make use of artificial intelligence throughout the hiring process, from advertising and attracting potential applicants to predicting candidates’ job performance. “Just like with the rest of the world’s digital advertisement, AI is helping target who sees what job descriptions [and] who sees what job marketing,” explains Aaron Rieke, a managing director at Upturn, a DC-based nonprofit digital technology research group. And it’s not just a few outlier companies, like HireVue, that use predictive AI.
Vox’s own HR staff use LinkedIn Recruiter, a popular tool that uses artificial intelligence to rank candidates. Similarly, the jobs platform ZipRecruiter uses AI to match candidates with nearby jobs that are potentially good fits, based on the traits the applicants have shared with the platform — like their listed skills, experience, and location — and previous interactions between similar candidates and prospective employers. For instance, because I applied for a few San Francisco-based tutoring gigs on ZipRecruiter last year, I’ve continued to receive emails from the platform advertising similar jobs in the area. Overall, the company says its AI has trained on more than 1.5 billion employer-candidate interactions. Platforms like Arya — which says it’s been used by Home Depot and Dyson — go even further, using machine learning to find candidates based on data that might be available on a company’s internal database, public job boards, social platforms like Facebook and LinkedIn, and other profiles available on the open web, like those on professional membership sites. Arya claims it’s even able to predict whether an employee is likely to leave their old job and take a new one, based on the data it collects about a candidate, such as their promotions, movement between previous roles and industries, and the predicted fit of a new position, as well as data about the role and industry more broadly. Another use of AI is to screen through application materials, like résumés and assessments, in order to recommend which candidates recruiters should contact first. Somen Mondal, the CEO and co-founder of one such screening and matching service, Ideal, says these systems do more than automatically search résumés for relevant keywords. For instance, Ideal can learn to understand and compare experiences across candidates’ résumés and then rank the applicants by how closely they match an opening. “It’s almost like a recruiter Googling a company [listed on an application] and learning about it,” explains Mondal, who says his platform is used to screen 5 million candidates a month. But AI doesn’t just operate behind the scenes. If you’ve ever applied for a job and then been engaged by a text conversation, there’s a chance you’re talking to a recruitment bot. Chatbots that use natural-language understanding created by companies like Mya can help automate the process of reaching out to previous applicants about a new opening at a company, or finding out whether an applicant meets a position’s basic requirements — like availability — thus eliminating the need for human phone-screening interviews. Mya, for instance, can reach out over text and email, as well as through messaging applications like Facebook and WhatsApp. Another burgeoning use of artificial intelligence in job selection is talent and personality assessments. One company championing this application is Pymetrics, which sells neuroscience computer games for candidates to play (one such game involves hitting the spacebar whenever a red circle, but not a green circle, flashes on the screen). These games are meant to predict candidates’ “cognitive and personality traits.” Pymetrics says on its website that the system studies “millions of data points” collected from the games to match applicants to jobs judged to be a good fit, based on Pymetrics’ predictive algorithms. 
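None of the vendors mentioned above publish their matching models, so the sketch below is purely illustrative rather than a description of how Ideal, LinkedIn Recruiter, or ZipRecruiter actually score applicants. It shows the general idea in its simplest form: represent a job posting and résumés as text vectors and rank candidates by how similar they are. The job text, résumé snippets, and candidate names are all made up.

# Minimal, illustrative resume-to-job ranking: TF-IDF vectors + cosine similarity.
# Hypothetical data; real systems use far richer features and feedback loops.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_posting = "Sales representative with CRM experience and cold-calling skills"
resumes = {
    "candidate_1": "Three years of B2B sales, Salesforce CRM, cold calling",
    "candidate_2": "Software engineer, Python, distributed systems",
    "candidate_3": "Retail sales associate, customer outreach, HubSpot CRM",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_posting] + list(resumes.values()))

# Row 0 is the job posting; the remaining rows are the candidates.
scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()
for name, score in sorted(zip(resumes, scores), key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.2f}")

Even a toy like this makes Mondal's point about the feedback loop: the ranking is only as good as the text the system is told to compare against, which is why vendors keep retraining on who was interviewed, hired, and ultimately performed well.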
Proponents say AI systems are faster and can consider information human recruiters can’t calculate quickly These tools help HR departments move more quickly through large pools of applicants and ultimately make it cheaper to hire. Proponents say they can be more fair and more thorough than overworked human recruiters skimming through hundreds of résumés and cover letters. “Companies just can’t get through the applications. And if they do, they’re spending — on average — three seconds,” Mondal says. “There’s a whole problem with efficiency.” He argues that using an AI system can ensure that every résumé, at the very least, is screened. After all, one job posting might attract thousands of applications, with a huge share from people who are completely unqualified for a role. Such tools can automatically recognize traits in the application materials from previous successful hires and look for signs of that trait among materials submitted by new applicants. Mondal says systems like Ideal can consider between 16 and 25 factors (or elements) in each application, pointing out that, unlike humans, it can calculate something like commute distance in “milliseconds.” “You can start to fine-tune the system with not just the people you’ve brought in to interview, or not just the people that you’ve hired, but who ended up doing well in the position. So it’s a complete loop,” Mondal explains. “As a human, it’s very difficult to look at all that data across the lifecycle of an applicant. And [with AI] this is being done in seconds.” These systems typically operate on a scale greater than a human recruiter. For instance, HireVue claims the artificial intelligence used in its video platform evaluates “tens of thousands of factors.” Even if companies are using the same AI-based hiring tool, they’re likely using a system that’s optimized to their own hiring preferences. Plus, an algorithm is likely changing if it’s continuously being trained on new data. Another service, Humantic, claims it can get a sense of candidates’ psychology based on their résumés, LinkedIn profiles, and other text-based data an applicant might volunteer to submit, by mining through and studying their use of language (the product is inspired by the field of psycholinguistics). The idea is to eliminate the need for additional personality assessments. “We try to recycle the information that’s already there,” explains Amarpreet Kalkat, the company’s co-founder. He says the service is used by more than 100 companies. Proponents of these recruiting tools also claim that artificial intelligence can be used to avoid human biases, like an unconscious preference for graduates of a particular university, or a bias against women or a racial minority. (But AI often amplifies bias; more on that later.) They argue that AI can help strip out — or abstract — information related to a candidate’s identity, like their name, age, gender, or school, and more fairly consider applicants. The idea that AI might clamp down on — or at least do better than — biased humans inspired California lawmakers earlier this year to introduce a bill urging fellow policymakers to explore the use of new technology, including “artificial intelligence and algorithm-based technologies,” to “reduce bias and discrimination in hiring.” AI tools reflect who builds and trains them These AI systems are only as good as the data they’re trained on and the humans that build them. 
If a résumé-screening machine learning tool is trained on historical data, such as résumés collected from a company’s previously hired candidates, the system will inherit both the conscious and unconscious preferences of the hiring managers who made those selections. That approach could help find stellar, highly qualified candidates. But Rieke warns that method can also pick up “silly patterns that are nonetheless real and prominent in a data set.” One such résumé-screening tool identified being named Jared and having played lacrosse in high school as the best predictors of job performance, as Quartz reported. If you’re a former high school lacrosse player named Jared, that particular tool might not sound so bad. But systems can also learn to be racist, sexist, ageist, and biased in other nefarious ways. For instance, Reuters reported last year that Amazon had created a recruitment algorithm that unintentionally tended to favor male applicants over female applicants for certain positions. The system was trained on a decade of résumés submitted to the company, which Reuters reported were mostly from men. Manjunath Kiran/AFP via Getty Images A visitor at Intel’s Artificial Intelligence (AI) Day walks past a signboard in Bangalore, India on April 4, 2017. (An Amazon spokesperson told Recode that the system was never used and was abandoned for several reasons, including that the algorithms were primitive and that the models randomly returned unqualified candidates.) Mondal says there is no way to use these systems without regular, extensive auditing. That’s because, even if you explicitly instruct a machine learning tool not to discriminate against women, it might inadvertently learn to discriminate based on proxies associated with being female, like having graduated from a women’s college. “You have to have a way to make sure that you aren’t picking people who are grouped in a specific way and that you’re only hiring those types of people,” he says. Ensuring that these systems are not introducing unjust bias means frequently checking that new hires don’t disproportionately represent one demographic group. But there’s skepticism that efforts to “de-bias” algorithms and AI are a complete solution. And Upturn’s report on equity and hiring algorithms notes that “[de-biasing] best practices have yet to crystallize [and] [m]any techniques maintain a narrow focus on individual protected characteristics like gender or race, and rarely address intersectional concerns, where multiple protected traits produce compounding disparate effects.” And if a job is advertised on an online platform like Facebook, it’s possible you won’t even see a posting because of biases produced by that platform’s algorithms. There’s also concern that systems like HireVue’s could inherently be built to discriminate against people with certain disabilities. Critics are also skeptical of whether these tools do what they say, especially when they make broad claims about a candidate’s “predicted” psychology, emotion, and suitability for a position. Adina Sterling, an organizational behavior professor at Stanford, also notes that, if not designed carefully, an algorithm could drive its preferences toward a single type of candidate. Such a system might miss a more unconventional applicant who could nevertheless excel, like an actor applying for a job in sales. “Algorithms are good for economies of scale. 
They’re not good for nuance,” she explains, adding that she doesn’t believe companies are being vigilant enough when studying the recruitment AI tools they use and checking what these systems actually optimize for.
Who regulates these tools?
Employment lawyer Mark Girouard says AI and algorithmic selection systems fall under the Uniform Guidelines on Employee Selection Procedures, guidance established in 1978 by federal agencies that guides companies’ selection standards and employment assessments. Many of these AI tools say they follow the four-fifths rule, a statistical “rule of thumb” benchmark established under those employee selection guidelines. The rule is used to compare the selection rates of applicant demographic groups and investigate whether selection criteria might have had an adverse impact on a protected minority group. But experts have noted that the rule is just one test, and Rieke emphasizes that passing the test doesn’t imply these AI tools do what they claim. A system that picked candidates randomly could pass the test, he says. Girouard explains that as long as a tool does not have a disparate impact on race or gender, there’s no law on the federal level that requires that such AI tools work as intended. In its case against HireVue, EPIC argues that the company has failed to meet established AI transparency guidelines, including artificial intelligence principles outlined by the Organization for Economic Co-operation and Development that have been endorsed by the US and 41 other countries. HireVue told Recode that it follows the standards set by the Uniform Guidelines, as well as guidelines set by other professional organizations. The company also says its systems are trained on a diverse data set and that its tools have helped its clients increase the diversity of their staff. At the state level, Illinois has made some initial headway in promoting the transparent use of these tools. Its Artificial Intelligence Video Interview Act, which takes effect in January, requires employers that use artificial intelligence-based video analysis to notify applicants, explain how the technology works, and obtain their consent. Still, Rieke says few companies release the methodologies used in their bias audits in “meaningful detail.” He’s not aware of any company that has released the results of an audit conducted by a third party. Meanwhile, senators have pushed the EEOC to investigate whether biased facial analysis algorithms could violate anti-discrimination laws, and experts have previously warned the agency about the risk of algorithmic bias. But the EEOC has yet to release any specific guidance regarding algorithmic decision-making or artificial intelligence-based tools and did not respond to Recode’s request for comment. Rieke did highlight one potential upside for applicants. Should lawmakers one day force companies to release the results of their AI hiring selection systems, job candidates could gain new insight into how to improve their applications. But as to whether AI will ever make the final call, Sterling says that’s a long way off. “Hiring is an extremely social process,” she explains. “Companies don’t want to relinquish it to tech.” Open Sourced is made possible by the Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.
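The four-fifths rule that Girouard and Rieke discuss above comes down to simple arithmetic: compute each group’s selection rate, compare it to the highest group’s rate, and flag anything under 80 percent. Here is a minimal Python illustration, using invented applicant and hire counts rather than figures from any real audit:

```python
# Hypothetical applicant and hire counts, invented for illustration; the 0.8 threshold
# is the four-fifths rule from the Uniform Guidelines.
applicants = {"group_a": 200, "group_b": 150}
hired = {"group_a": 40, "group_b": 18}

rates = {group: hired[group] / applicants[group] for group in applicants}
highest_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest_rate
    verdict = "passes" if ratio >= 0.8 else "flags possible adverse impact"
    print(f"{group}: selection rate {rate:.0%}, {ratio:.2f} of the top rate -> {verdict}")
```

As Rieke notes, clearing this bar is a weak guarantee on its own; a tool that selected candidates at random would clear it too.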

Read More...
posted 4 days ago on re/code
Away co-founders Steph Korey and Jen Rubio onstage at the 2019 Code Commerce event in New York City. | Keith MacDonald for Vox Media A behind-the-scenes look at how Lululemon’s Stuart Haselden replaced Away CEO Steph Korey earlier than planned. Just four days after an investigation into a “toxic company culture” at the luggage startup Away went viral, the company announced that it had hired a top executive from Lululemon to replace its embattled co-founder Steph Korey as CEO. The company told the Wall Street Journal that the CEO search had been in progress since the spring, insinuating that the fallout from the article, published by The Verge, did not play a role in Korey’s resignation. But multiple sources tell Recode that while new CEO Stuart Haselden had indeed planned to join Away before The Verge piece was published, he was not meant to immediately helm the CEO role; instead, he would join the company as Away’s chief operating officer, or COO, reporting to Korey, and would later move into the top spot if all went according to plan. Under that original plan, Haselden would eventually replace Korey as CEO — perhaps as early as mid-2020 — after he got to know the business better. It was also meant to allow Korey time to get comfortable with the transition, according to a person familiar with the plan. (Haselden was already COO of Lululemon, a public company worth $29 billion, and wouldn’t have taken the COO role at a much smaller company without the understanding that he would eventually hold the top spot.) But after the workplace culture story erupted late last week, some of Away’s investors pushed to rip the band aid off and accelerate the CEO swap. On Friday, the day after the article ran, Korey had posted an apology statement on Twitter that seemed to indicate she wasn’t going anywhere — “I know I have more work to do, and I will do better for the team.” But by Monday, she had agreed to a new plan to step aside for Haselden effectively almost immediately (he starts as CEO on January 13), giving up day-to-day control of the company while keeping a seat on the company’s board of directors with the title of executive chairman. In a statement to Recode, the company said in part: “At the beginning of the year, our co-founders recognized the need to bring in additional leaders to help manage the company and accelerate growth. Steph Korey launched a search for an executive to become COO. During the course of that process, we met a fantastic candidate in Stuart Haselden. During our months of discussions with him, it became clear his experience and skills make him uniquely qualified to be our next CEO.” The statement did not address the last minute change in plans prompted by The Verge article, and Away would not comment on it. The original investigation from The Verge painted Korey as a micro-managing chief executive who blasted employees publicly on the workplace collaboration software Slack, and who hounded overworked junior employees in the company’s customer service division. “I know this group is hungry for career development opportunities, and in an effort to support you in developing your skills, I am going to help you learn the career skill of accountability,” Korey wrote in one series of middle-of-the-night Slack messages to her customer service staff (emphasis hers). 
“To hold you accountable...no more [paid time off] or [work from home] requests will be considered from the 6 of you...I hope everyone in this group appreciates the thoughtfulness I’ve put into creating this career development opportunity and that you’re all excited to operate consistently with our core values.” In her apology a day after the story ran, Korey said she was “appalled and embarrassed” by the exposed Slack messages published in the piece. She said she had been working with an executive coach this year, and had built out an experienced leadership team around her and her co-founder Jen Rubio. The story, and Korey’s subsequent resignation, touched off debates in the tech industry and beyond. Some wondered why an online luggage retailer seemed to espouse the kind of cutthroat company culture and demanding work-life imbalance that have historically been more common in Silicon Valley. Others questioned whether the outcome for Korey might have been different if she were a man and not a woman. Investors in Away include venture capital firms like Global Founders Capital and Forerunner Ventures, as well as the mutual fund giant Wellington Management and the hedge fund Lone Pine Capital. The fact that some investors were keen to find a replacement for Korey — the founder-CEO of one of the hottest consumer startups around — even before The Verge article ran indicates that some of Korey’s leadership flaws were already evident to the board of directors. But Away has still been able to raise more than $180 million in venture capital since its founding in 2015, and was valued at $1.4 billion in its most recent round of investor funding. The reason? Korey and Rubio, in a very short period of time, built a company that has sold hundreds of millions of dollars’ worth of hard-shell, carry-on suitcases popular with urban, millennial consumers. They’ve also been building a brand with ambitions to expand into other product categories such as apparel and consumer-packaged goods. Along the way, the two founders have also sold tens of millions of dollars combined in company stock back to one of Away’s investors, according to a source, relinquishing part of their stakes in the startup while limiting their financial downside in the event that the company didn’t continue to find success in the future. The company allowed dozens of Away employees to also participate in the stock sale, according to a separate source. Now, it will be Haselden’s job to repair whatever fractures remain in Away’s company culture, while keeping it on the path toward an outcome that many around the company still view as realistic: an IPO.

Read More...
posted 4 days ago on re/code
Sean Gallup/Getty Images The company is expanding its definition of harassment on the video platform, but there’s still plenty of room for debate. YouTube announced on Wednesday that it’s making long-awaited changes to its harassment policy, saying it will tighten rules around what’s considered a threat and toughen punishment for repeat offenders. For years, the video platform has faced intense scrutiny from critics, including its own employees, who say it’s allowed hate speech and harassment to flourish — particularly with content that targets racial minorities, women, LGBTQ individuals, and other historically marginalized groups. Controversy around YouTube’s policies hit a high point in June after Vox Media journalist Carlos Maza called public attention to the repeated harassment he was receiving from conservative YouTube commentator Steven Crowder. Over the course of two years, Crowder routinely used racial and homophobic slurs in his widely watched videos attempting to debunk Maza’s work. After initially saying that Crowder’s videos didn’t violate YouTube’s community guidelines, the company ended up reversing course and penalized Crowder by suspending his ability to earn ad revenue. Still, it stopped short of removing any of his videos from the platform. Amid criticism for how it handled the situation, the company promised six months ago that it would take a “hard look” at its policies. Now, we’re seeing the results of those changes, which appear to be a step in the right direction for YouTube, whose critics have long demanded that it do a better job policing harmful content. The changes announced on Wednesday are incremental and will largely depend on execution rather than policy. If, going forward, YouTube does take down more content that meets a broader definition of harassment, it will undoubtedly provoke controversy, particularly at a time when the company continues to face pressure from Republican leaders such as President Donald Trump over claims that the video platform censors conservative speech. “One of the goals is to make sure that the free speech and public debate that exists on YouTube platform is not stifled, but that it continues to exist,” Neal Mohan, YouTube’s chief product officer, told Recode about the changes. Mohan said the company is going through an “incubation process” in which the company is training thousands of raters to more accurately identify speech that constitutes harassment under the new policies. Raters are YouTube staff who help determine if content violates the company’s community guidelines. When asked specifically about the Crowder-Maza controversy, Mohan confirmed that YouTube would take down several videos posted by Crowder in which he attacks Maza. Mohan declined to comment on any other specific videos that YouTube plans to remove from its platform. In June, Vox’s editor-in-chief, Lauren Williams, and head of video, Joe Posner, wrote an open letter asking YouTube to better clarify and enforce its harassment policy. A spokesperson for Vox Media declined to comment on YouTube’s harassment policy changes and plans to take down certain Crowder videos, and referred Recode to the previous open letter. Beyond the Crowder situation, the remaining question is exactly how stringently YouTube will enforce these policies across the billions of videos that are watched on its platform every day. 
While these guidelines help clarify certain scenarios that constitute harassment, there’s still plenty of room for ambiguity around how these rules may or may not be applied.
Broadening the definition of harassment
YouTube is making three significant shifts in its content moderation policies, all of which essentially make it easier to identify and remove videos on the grounds of harassment. The first change is that it will “no longer allow content that maliciously insults someone based on protected attributes such as their race, gender expression, or sexual orientation,” and it will apply these rules not just to private individuals but to public figures as well. The company is also expanding its definition of threats to include “veiled or implied threats,” not just direct ones — for example, someone menacingly holding a knife in a video while talking about a person, even if they don’t actually state a threat. The next policy YouTube is updating involves repeat offenders. The company will now reserve the right to dish out “strikes” and eventually remove accounts that repeatedly upload content that may not qualify as harassment in any single instance but that altogether demonstrates a pattern of targeting and harassment. Lastly, the company says it’s applying these rules to the comment sections of videos, and that it expects to remove more comments than it currently does (in the third quarter of 2019, YouTube removed 16 million of them). YouTube will be rolling out a tool to help video creators moderate comments by auto-flagging ones that are potentially inappropriate, and giving video owners the ability to review them before they’re posted. It will be making this an optional feature that’s turned on by default for YouTube’s largest channels with the most active comments sections.
How YouTube made the changes
In its blog post, YouTube said it consulted with video creators and organizations that focus on online bullying and journalists’ rights, as well as “free speech proponents,” and “policy organizations from all sides of the political spectrum,” in drafting this policy. Mohan declined to give specific names of organizations that helped with the process. Mohan also told Recode that YouTube turned to members of employee resource groups (ERGs), including ones for LGBTQ and black employees. In the past, Google’s workforce has publicly critiqued how YouTube handled hate speech and harassment. After the Maza harassment controversy, in June, the company’s employees protested Google’s presence at the San Francisco Pride Parade because of the platform’s perceived lack of protection for LGBTQ individuals who were being harassed. In response to YouTube’s announcement, one former Google employee told Recode, “I do think the most important thing is not what they say — because their policies already seemed to prohibit a lot of the harassment we were complaining about — but what they do. We’ll know in a few months if these changes are in any way meaningful.”
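The comment tool described in this piece, auto-flagging potentially inappropriate comments and holding them for the channel owner’s review, is at bottom a classifier feeding a review queue. YouTube has not said how its classifier works; in the toy Python sketch below, a crude keyword list stands in for it:

```python
# Toy moderation queue. The keyword list is a crude stand-in for whatever
# classifier YouTube actually uses, which the company has not described.
FLAGGED_TERMS = {"idiot", "loser", "ugly"}

def needs_review(comment):
    """Return True if the comment contains any flagged term."""
    words = set(comment.lower().split())
    return bool(words & FLAGGED_TERMS)

incoming = ["Great video!", "You are such a loser", "More like this please"]

held_for_review = [c for c in incoming if needs_review(c)]
published = [c for c in incoming if not needs_review(c)]

print("published:", published)
print("held for the creator to review:", held_for_review)
```

The hard part, as the policy debate above suggests, is the classifier itself, not the queue.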

Read More...
posted 4 days ago on re/code
A woman uses a computer to control robots at the 855,000-square-foot Amazon fulfillment center in Staten Island, New York on February 5, 2019. | Johannes Eisele/AFP/Getty Images Amazon is leading a robotics race that will have a seismic impact on the warehouse industry, which employs more than 1.1 million Americans today. When the tech industry has come up in the 2020 Democratic presidential debates, the most important discussion topic hasn’t been about breaking up the tech giants; it’s been about the automation of jobs and the massive impact this is expected to have on the US labor force. At the center of this debate is Amazon, a company that employs hundreds of thousands of people in its massive warehouse network and whose investment in robots and other automation technologies means it could one day be a huge job eliminator, too. In 2012, Amazon spent $775 million to purchase a young robotics company called Kiva Systems that gave it ownership of a new breed of mobile robots that could carry shelves of products from worker to worker, reading barcodes on the ground for directions along the way. But it also gave Amazon the technical foundation on which it could build new versions of warehouse robotics for years to come, setting the stage for a potential future where the only people inside Amazon’s facilities are those employed to maintain and fix their robotic replacements. Today, Amazon has more than 200,000 mobile robots working inside its warehouse network, alongside hundreds of thousands of human workers. This robot army has helped the company fulfill its ever-increasing promises of speedy deliveries to Amazon Prime customers. “They defined the expectations for the modern consumer,” said Scott Gravelle, the founder and CEO of Attabotics, a warehouse automation startup. And those expectations of fast, free delivery driven by Amazon have led to a boom in the retail warehouse industry, with entrepreneurs like Gravelle and startups like Attabotics attempting to build smarter and cheaper robotic solutions to help both traditional retailers and younger e-commerce operations keep up with a behemoth like Amazon. This robotics race — led by Amazon — will have a seismic impact on the warehouse industry, which employs more than 1.1 million Americans today. And the rise of these artificially intelligent robots means there’s likely a day coming when these warehouse robots will be capable of taking on just about every human task, and of replacing just about every human worker. “The thing that really makes us unique as human beings is our ability to solve problems,” Martin Ford, author of Rise of the Robots, told me this summer for an episode of the Land of the Giants: The Rise of Amazon podcast. “Machine learning and related technologies are for the first time allowing machines to do that and to compete with that capability. That’s really kind of a game-changer.” In the meantime, robots have the potential to eliminate some of the most menial warehouse labor, as evidenced by the Amazon robots that now transport products across massive warehouses in place of workers who used to be forced to walk the equivalent of 10 or more miles a day. That sounds like a good thing, but new research indicates these robots may be increasing worker injury rates, even though they’re taking on some of the hard labor. Here’s a look at the good and the bad of the rise of robots inside of Amazon, and a peek ahead at where this is all headed. 
The good
If you’ve heard stories of Amazon warehouse workers walking 10 to 20 miles a day on hard concrete floors, well, they’re true. But in newer warehouses outfitted with robots, much of that walking has been eliminated. “Walking 12 miles a day on a concrete floor to pick these orders. ... If you’re not 20 years old, you’re a broken person at the end of the week,” said Marc Wulfraat, founder and president of the supply chain consultancy MWPVL International. The employees who used to do the walking — some called “stowers,” others called “pickers” — now remain stationary, standing at their own work stations, with cushion pads beneath their feet, if they are working in one of the robotic warehouses. Stowers in older Amazon facilities used to walk up and down long aisles pushing a cart full of products, placing them randomly on shelves where they found space, and scanning them with a handheld device to mark their location in a system. Now Amazon robots carry empty shelving units — known as pods — to the workstations of stowers, who take products placed in front of them and fit them into open shelf space inside the shelving pods. When the pod is full, the stower presses a button that sends the robot and attached shelving unit rolling across a caged-in area of the warehouse, and eventually to the workstation of a “picker.” Like stowers, pickers in older facilities walked miles on end each day, plucking a product off a shelf, scanning it, and placing it into a cart they pushed the whole way. But they, too, now remain standing at their own workstation in Amazon’s robotic warehouses, plucking items off of shelving units that robots carry right to them. “Having a rubber mat, where goods come to you, is three times more productive than the traditional approach and it is more humane on the people who work in these fulfillment centers,” Wulfraat said. An Amazon spokesperson said these new technologies help the company store up to 40 percent more inventory in its warehouses, and that they make employees’ jobs easier.
The bad
“But picking three times faster also implies more wear and tear due to repetitive motion and working faster at lifting and handling products,” Wulfraat added. So along with the drive to automate more warehouse tasks come much higher expectations for workers. “The robots have raised the average picker’s productivity from around 100 items per hour to what Mr. Long and others have said is a target of around 300 or 400, though the numbers vary across teams and facilities,” the New York Times reported in July. An Amazon spokesperson did not comment on the specific goals, but said the company provides coaching to those struggling to meet them. The new targets, though, mean that workers are allowed just a handful of seconds between each product task, which can be complicated by the 8-foot-tall shelving units that the robots carry to the stations of pickers and stowers, Wulfraat said. Because of that height, each worker has a stepladder that they occasionally need to ascend to place or retrieve products from the top row of the shelving units. “Workers who stow items are supposed to keep lightweight products at the top or bottom of the pod and heavy products between the chest and the knees,” Wulfraat said. 
“But it’s not possible to adhere to this when the work is happening so fast and people are under the gun, so people take safety and ergonomic shortcuts out of necessity.” Johannes Eisele/AFP/Getty Images Men work at a distribution station in the 855,000-square-foot Amazon fulfillment center in Staten Island, on February 5, 2019. Such shortcuts mean that the pickers on the receiving end sometimes have to carry heavier items than the setup was designed for down the steps of their stepladders, resulting in a “higher probability” of injury, according to Wulfraat. A recent investigation by the Center for Investigative Reporting’s Reveal group, published in The Atlantic, appeared to show that the rate of worker injuries at Amazon’s robotic warehouses is in fact higher than at facilities where robots are not in use. “Of the records Reveal obtained, most of the warehouses with the highest rates of injury deployed robots,” the piece read. “After Amazon debuted the robots in Tracy, California, five years ago, the serious-injury rate there nearly quadrupled, going from 2.9 per 100 workers in 2015 to 11.3 in 2018, records show,” the piece added. The Amazon spokesperson said in an email that the health and safety of Amazon employees is a top priority, and listed several initiatives to try to back that up. She also said Amazon is more aggressive than others in the industry when it comes to documenting injuries, insinuating that’s why Amazon injury rates may be higher than industry norms. Still, experts who study the robotics industry and its impact on workers fear that the squeezing of human workers is a feature — not a bug — of this period bridging workplaces to a fully automated future. “The kind of efficiency that Amazon has to have in order to operate the way it operates now and also to do what it wants to do in the future. ... They’ve got to get more and more efficient,” Ford, the author, said. “Now as long as people are still part of the loop, what that means is that the whole system has to effectively come under more and more algorithmic control.” He continued: “So in a sense, if you’re one of these workers in that environment, you’re truly just going to be kind of a cog in the machine. You’re gonna be sort of a plug-in neural network as a human being that is performing some tasks that right now the robots can’t.”
What’s next
Amazon continues to add versions of the original Kiva robots to more and more fulfillment centers. At the same time, the company is working on new robotic inventions to handle new tasks inside its facilities. Reuters reported in May that Amazon was rolling out automated packing machines in some of its warehouses. The company has also started introducing new mobile robots — similar in appearance to the original Kiva ones — to shuttle packages around inside sortation centers, which are mini-warehouses where packaged customer orders are sorted by geographic destination. In these same facilities, Amazon is also experimenting with giant robotic arms that would place the ready packages onto the mobile robots. All the while, Amazon continues to chase the Holy Grail of warehouse robotics: a robot that can grasp a wide range of merchandise types — with different shapes, sizes, and form factors — with a level of dexterity that’s similar to what the human hand can do. A company spokesperson confirmed that it’s an area that Amazon is interested in and continues to research. Such an invention, though, could mean Amazon would need fewer people working as pickers and stowers, too. 
In past years, Amazon has sponsored a contest where robotics teams from around the world have competed to create the best robot picker. More recently, Amazon has decided to fund research from external teams instead of hosting the event. CEO Jeff Bezos said earlier this year that the robotic grasping problem will be solved in the next decade. But some logistics and robotics experts think that certain types of Amazon merchandise could be picked by robots years sooner than that. And Amazon has indirectly hinted at this, too. This year, the company announced plans to “upskill” 100,000 of its US employees, including warehouse workers. A spokesperson said that as a top US employer, the company feels a responsibility to help employees develop new technical skills to move into better-paying jobs. In a press release, Amazon cited a “changing jobs landscape” as the impetus for the job-training push. It could have easily used the word “automation” instead. Open Sourced is made possible by the Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.

Read More...
posted 4 days ago on re/code
Christina Animashaun/Vox The fight for your phone is “where the game is going to be played.” If you think politicians already know too much about you — from your home address to what you like to search for on YouTube to whatever was leaked in the Cambridge Analytica scandal — just wait. Campaigns are scouting out new, intrusive sources of political data to track possible supporters that both sides agree will, at some point, become the norm in American elections going forward. Political campaigns have become contests in data collection, with both Democrats and Republicans pacing digital beaches with new and improved metal detectors meant to find the latest types of buried treasures. And in pitched presidential elections, with both sides on the hunt for ever-so-slight upper hands, data collection efforts are growing more sophisticated, disguised, and all-encompassing. Joshua Lott/Getty Images Democratic presidential candidate and former Vice President Joe Biden takes a selfie as he greets attendees during an event in Council Bluffs, Iowa, on November 30, 2019. That spells trouble for activists who worry about the reach of political candidates and of tech companies — especially when the two combine forces. If political campaigns can invade our privacy and manipulate our opinions with the data they collect, some fear it will erode our democracy. “Campaigns are collecting anything and everything that’s possible,” said Frank Ahearn, a privacy expert who advises clients on how to scrub their digital footprint. “Online data has become a new weapon.” Political data collection encompasses a few kinds of information: There is the data that you hand over to advertisers like Donald Trump’s presidential campaign when you sign a petition or open a certain email. There is the data that you hand over to your credit card company when you buy a certain brand of soap or frequent a certain fast food chain. There’s also the data you litter across the internet, dropping “cookies,” which digitally record when you visit a website and what you click on when you’re there. And then there is the next frontier in political advertising: your personal location data, collected from apps you’ve downloaded that then take this sensitive information and sell it to third parties — including political campaigns. Love it or hate it, digital strategists see this location data as part of the future of political campaigns, as candidates and advocacy groups harness your personal whereabouts and leverage it to try to win your support. One campaign might know if you’ve passed by one of their lawn signs recently. Another might track whether you’ve been in a specific Catholic church in Dubuque, Iowa. Forces behind Trump, who three years ago said he considered data to be “overrated” in politics, are exploring this next iteration of digital campaign tools. And with the incumbent president’s vastly superior resources and innate appetite for digital experimentation, many leading Democrats are concerned that it is the GOP — not the digitally pioneering party of Barack Obama — that is mastering Silicon Valley’s tricks ahead of what’s expected to be the most expensive US presidential election ever. Your phone’s location is “where the game is going to be played” Political data begins and ends at the voter file, which is a compendium of information about you that’s rooted in offline data such as your voting frequency, party registration, and what you may have told volunteers when they knocked on your door. 
Political operatives are always working to enrich this offline data with new, digital data about you — and to do so better and faster than their political rivals. Part of the reason that campaigns are seeking new data sources is that some of the favored sources of the past — most notably, cookies — are proving less useful. Browsers such as Safari, Chrome, and Firefox have recently made it more difficult to track specific people on desktop and mobile browsers as they cycle across the internet. These cookies used to be a major part of campaigns’ advertising strategies. And so some political operatives on both sides see mobile advertising IDs — basically, a profile of each cellphone user, based on location data, that is assigned an “ID” and is theoretically anonymous — as the next frontier in political data collection. They’re already a well-worn data set in commercial advertising, an area where some are starting to voice privacy concerns. “Getting device IDs is where the game is going to be played,” said Keegan Goudiss, who served as the Bernie Sanders campaign’s head of digital advertising in 2016. Here’s how it works: Certain apps you’ve downloaded on your phone and given permission to collect your location data then sell that data to brokers, who then sell that information to bidders like political campaigns. Campaigns can — with the help of third parties — then match specific mobile advertising IDs to specific voters and harness that data to present these people with the optimal campaign messaging. Justin Sullivan/Getty Images Phone location data is one of the data sources that political operatives have been using to gain an edge on their political rivals. Experts in the field have ideas about how the GOP or Democrats might use this kind of data. One Democratic strategist imagined a scenario in which the GOP could use people’s phone IDs to construct mass datasets — for example, one that contains information about every churchgoer in America. Then Republicans could match that data with voting history and come up with a list of churchgoers who infrequently show up on Election Day. The GOP could then send its volunteers to knock on specific doors in specific neighborhoods, for instance, and try to get those voters to the polls. Cyrus Krohn, who oversaw digital strategy for the Republican National Committee in 2008, floated another possibility: What if Republicans used phone locations to track which routine patrons of Chick-fil-A have stopped visiting the chain after the company announced it would no longer support anti-LGBTQ organizations? Republicans could then target these disaffected social conservatives with messaging meant to reenergize them. This is all no longer theoretical: The Trump campaign earlier this year changed its privacy policy to alert voters that it might use beacons, or transmitters that use Bluetooth to track you and your phone’s proximity to a specific location. The campaign has never actually ever used beacons, a campaign source said. A pro-Trump super PAC, Committee to Defend the President, has reportedly gone ahead and employed beacons to capture mobile IDs. Democrats say that they, too, have experimented with collecting location-based data in the past, but on a smaller scale. And strategists think it’s only a matter of time before this becomes de rigueur in presidential politics. “Telephone tells all of our truths. We confess to our phones almost daily,” says Ahearn. 
“The mobile phone is the gold of information.” But privacy advocates worry about this new era we’re entering. “Anytime you’re looking at targeted advertising based on location, you’re inviting manipulative practices. And when it comes to political advertising, there’s a lot of room for abuse and a lot of reason to be concerned,” said Lindsey Barrett, an attorney who specializes in tech privacy issues. That’s especially true, she said, for a “Trump campaign in particular that has shown no problem whatsoever about lying and shoving really dangerous rhetoric down people’s throats.”
Campaigns already know plenty about you
But that’s mostly what’s coming in the future. And the present is already concerning. Democratic and GOP campaigns have focused primarily on collecting what is called “first-party data,” or data that they collect directly from voters and consider the gold standard. That explains the vast number of surveys, petitions, and emails that flood the inboxes and feeds of American voters — and the Trump campaign, in particular, wants to know who its most committed voters are. Purchases through the campaign store count, too. Of particular value to Trump is your cellphone number. His campaign has invested heavily in building out its text messaging list, according to a person familiar with the matter. Campaign rallies, which Trump has been holding continuously since he was first elected, reportedly require at least one attendee per group to provide their phone number to receive their tickets, supplying the Trump campaign with a robust list of its most committed backers. “We can use it as a data mining opportunity,” campaign manager Brad Parscale told one interviewer. “Turn every single one of our rally goers into a volunteer on-site for the rally. They can get out there and actually collect data. ‘Hey, who were your 10 friends? Who were your hundred friends? Tell us what your friends like about Trump. Give us their phone numbers.’ That will give us an even greater opportunity to expand the spider web of data.” On Election Day 2016, Trump had a cellphone list of 10 million numbers. By Election Day of 2020, Trump officials are aiming to have 50 million. A small text message list can in theory raise as much money as a large email list. That’s because those who have taken an action, such as attending an event or texting a campaign account, are more likely to be super supporters. Text messaging supporters is, of course, nothing new. In 2016, peer-to-peer texting — in which you receive a personal text from a volunteer or supporter, as opposed to a robotext — was a relatively new persuasion tool. But its centrality to political campaigns has expanded. “The data that’s going to be dragged from those peer-to-peer interactions could potentially provide insights into who’s my best possible persuader,” Krohn said. However, the backbone of political campaigns’ data collection continues to come from so-called second-party data: commercial information obtained from brokers. Campaigns buy data such as credit card information that illuminates the buying habits of specific voters. Datasets can be built with as many as 300 different attributes — such as whether homeowners own DVD players, whether they’ve bought crochet supplies recently, whether they smoke cigarettes, or whether they have a premium credit card — and then, when that data is run through a custom-built model and added to a party’s voter file, it can help campaigns target their likely voters. 
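The pipeline described above, matching purchased attributes onto a voter file and running a model over the result to pick targets, can be sketched in a few lines of Python. The records, attributes, and weights below are invented for illustration; they are not any campaign’s or vendor’s actual model:

```python
# Invented records, attributes, and weights, purely to illustrate the join-then-score pattern.
voter_file = {
    "V001": {"name": "Jane Doe", "party": "R", "votes_cast_last_4_elections": 1},
    "V002": {"name": "John Roe", "party": "R", "votes_cast_last_4_elections": 4},
}

# Commercial attributes a broker might supply, already matched to the same voter IDs.
purchased_attributes = {
    "V001": {"premium_credit_card": True, "smokes": False},
    "V002": {"premium_credit_card": False, "smokes": True},
}

# Join the purchased attributes onto the voter file.
enriched = {vid: {**record, **purchased_attributes.get(vid, {})} for vid, record in voter_file.items()}

def outreach_score(record):
    """Higher score = more worth contacting: a likely supporter who rarely turns out."""
    score = 2 if record["party"] == "R" else 0
    score += max(0, 3 - record["votes_cast_last_4_elections"])  # infrequent voters score higher
    if record.get("premium_credit_card"):
        score += 1  # stand-in for whatever signal a real model would actually weigh
    return score

targets = sorted(enriched, key=lambda vid: outreach_score(enriched[vid]), reverse=True)
print(targets)  # voter IDs ordered by how much outreach they would get in this toy model
```

A real voter file carries hundreds of attributes per person and a statistically fitted model rather than hand-picked weights, but the join-then-score pattern is the same.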
But the Trump campaign source said the campaign prioritizes obtaining first-party data since the respondent is taking the initiative to interact with Trump’s messaging. Data like that is added to the Trump campaign’s voter file and determines the types of ads that you and other voters might see as you crawl across Facebook or Google. On those platforms, the Trump campaign became known in 2016 for testing loads of ads simultaneously to observe infinitesimal differences in spots, essentially a “throw everything at the wall and see what sticks” strategy for the digital age. During the 2016 campaign, Trump ran 5.9 million unique ads. All this experimentation — and boundary pushing — worries many Democrats, who are likely to be locked in their own political battles well into 2020 as Trump refines his digital tactics. “They have a lot of money invested in how do we get as much data from our supporters and potential supporters as possible,” Goudiss said, “at a larger scale than the Democrats.” Open Sourced is made possible by the Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.

Read More...
posted 4 days ago on re/code
Christina Animashaun/Vox Mobile apps want to know everything about you, but you can minimize what they have access to. It’s no secret that mobile apps take some, shall we say, liberties with your personal information. From Pokémon Go to FaceApp, you’ve probably heard about how apps track and collect data about what we do on them and on our devices, often without us knowing it. The good news is you have some control over your apps and have some options for protecting your privacy. The bad news: Even these have limitations. Here’s what you can do — and what you can’t. App access requests aren’t necessarily about invading your privacy, but they come with risks Apps often require access to parts of your mobile device to function — you know, their intended purpose and, presumably, why you downloaded them in the first place. If you want to post photos on Facebook, its app needs to access your photo library. If you want real-time traffic updates that follow you as you drive, Waze needs access to your phone’s location services. Speech-to-text programs need access to your microphone. That doesn’t mean they’re using that access to listen to your conversations. In fact, they probably aren’t (except for when they are, like when Facebook workers were listening to users’ audio chats). But some apps may use that access for things that aren’t required for the app’s functionality, or they want access to other device features that have nothing to do with the app. That’s how you get flashlight apps that should only need access to your camera’s flash hardware but that also collect your location data and sell it to ad networks. Drew Angerer/Getty Images A group of teens take a photo with a smartphone in Times Square on December 1, 2017. “The reason for data collection is almost always monetization,” David Choffnes, an associate professor in computer science at Northeastern University, told Recode. “This could be from advertising or from selling your data to other companies. So even when the app you download is ‘free,’ you are often paying for it — with your data.” Permissions and privacy policies can help Over the years — and typically in response to criticism over a newly discovered privacy breach — Apple and Google have taken steps to prevent unauthorized access to device features and to better inform users about what their apps access. “There’s still a whole ecosystem predicated on sucking as much information from people’s phones as possible,” Jen King, the director of consumer privacy at Stanford Law School’s Center for Internet and Society, told Recode. “Now you have a little bit more choice over denying apps access to some of this.” To that end, apps must now ask for and receive your permission to access certain device features; they don’t just have it by default. That’s why you get that little popup window when you first install an app that requests access to things like your camera, your location, or your address book. You may also be able to control when the app has access, like allowing it to track your location only when the app is in use. The popup usually also says why that access is being requested in the first place, although that explanation may leave a few things out. BuzzFeed News reported that apps may tell you they need your location data to give you a better, more personalized experience without mentioning that they also sell that data or use it to target ads. 
One weather app, the New York Times found, sent location data to 40 different companies while users probably assumed their location was only being tracked to get the local weather forecast. You also have privacy policies, which app stores now require apps to link out to, that explain in greater detail what data apps collect and how they use it. But those policies can be lengthy, vague, and difficult for the average person to understand. For instance, a policy might say your data could be shared with affiliates, which actually means it will be shared with advertisers and/or analytics companies. The Wall Street Journal broke down policies from Apple, Google, Amazon, and Domino’s to show just how much data could be collected from something as simple as two friends buying pizza. In the end, it found 53 pieces of information could have been collected from the two friends, who would have to spend about five hours reading more than 75,000 words of privacy policies to figure that out. No one is going to spend their time doing this, and companies know that. “Consumers aren’t given the information they need to make informed decisions, and the entities supplying that information are not incentivized to give them accurate or useful information,” Serge Egelman, research director of the Usable Security & Privacy Group at the International Computer Science Institute, told Recode. What to look out for Some of the worst privacy invaders share the same red flags. Here’s what to watch out for. Free apps are usually loaded with trackers — they have to make money somehow, after all — though you shouldn’t assume that purchased apps will protect your privacy. Egelman’s research has found that the paid versions of apps often have the same trackers as their free counterparts. “I think we are moving into an era where you can’t just assume that you pay for something and it doesn’t do that,” King said. “I think that used to be a safe assumption and it’s getting a lot less safe.” Games are often considered to be among the most invasive apps — not to mention that they’re often targeted to or made expressly for kids who should be protected by privacy laws tailored to children. If you’re downloading a game app, give its permissions requests a careful look. Zhang Peng/LightRocket via Getty Images Gaming apps are some of the most invasive when it comes to privacy. One simple thing to watch out for: You should think twice before downloading apps that ask to access features that don’t seem to have anything to do with the service they provide. “Ask yourself if the permission is commensurate with what the app is,” King said. “If it’s a weather app, does it need access to your photos or access to your address book? Not likely.” You might also want to do some research on an app’s developer. If you’re not comfortable with the Chinese government potentially having access to your data, for instance, it may interest you to know that TikTok is owned by a Chinese company. TikTok maintains that it does not store any user data in China and that it does not share user data with the Chinese government. Similarly, FaceApp is based in Russia but says it does not store user data in that country. It’s up to you to decide how much you trust those companies. It’s also a good idea to periodically check your device’s settings to see which apps you’ve given permissions to and what you’re permitting them to do. 
Apple and Android devices now tell you which features apps have requested access to and if you’ve granted that access — and give you the ability to change those permissions in some cases. Here’s how: For Apple devices, go to Settings > Privacy. You’ll see a list of permissions (for example: location services, contacts, or photos). Click each one to see which apps have requested access to those features and change them accordingly. You can also go to Settings and scroll down to where each of your apps is individually listed. Click on each one to see what you have given it access to and adjust as you see fit. For Android devices, go to Settings > Apps & Notifications > App Permissions. You’ll see a list of permissions. Tap each one to see which apps have access to them and then adjust. You can also go to Settings > Apps & Notifications > [App] > Permissions. Adjust accordingly. “My approach is to deny permissions to an app unless you determine that it’s absolutely necessary for the app to function,” Choffnes says. “Another approach is to limit the number of apps you install on your phone. Put another way, think twice before installing an app, and occasionally delete apps you aren’t using anymore.” But permissions have limits Much of Egelman’s research focuses on the data collection users don’t know about and can’t stop, no matter how many permissions they deny. He says permissions are “an illusion of control.” Apps simply find other ways to get the information they want, including location data. Or they use data that isn’t controlled by permissions, like unique device identifiers that allow apps to collect data about how and when you use them, which advertisers then use to better target their products. Both Apple and Android now allow you to reset advertising identifiers, and Apple devices have a “limit ad tracking” option. “The bread and butter for the tracking that occurs — profiling people and what they do — that data generally isn’t protected by the permissions system at all,” Egelman says. Egelman worked with researchers to create AppSearch, a service that can give you some idea of what your Android apps are really tracking and who they’re sending that information to. I checked it out for myself and, sure enough, many of my favorite apps were laden with trackers. Cookie Jam, I am so disappointed in you. And, of course, there are the apps that do objectionable things with your data without being transparent about it: Facebook, Brightest Flashlight, Path, and Snapchat have all settled FTC complaints on privacy violations. There’s not much you can do about that, other than boycott an app if you don’t like how it’s handled users’ data. “There are so many stories about data breaches and unexpected uses of our data that it’s hard to flag just one as the most troublesome,” Choffnes says. “I would argue the most troublesome use is the one that hasn’t happened yet.” What else you can do Egelman encourages people who are concerned about how apps collect and use their data to write to their representatives and demand better privacy laws and the enforcement of those laws. As we’ve seen again and again, tech companies aren’t going to regulate themselves. And while privacy policies are flawed, app stores require them as part of compliance with California’s online privacy law, which requires any online service that collects personally identifiable information about California residents to “conspicuously post” a privacy policy that spells out what is being collected. 
Without it, those policies may not exist at all. Of course, the best way to avoid having your data mined by apps is not to download them at all. You’ll just have to decide for yourself if what the app gives you is worth what it takes away. Open Sourced is made possible by the Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.

Read More...
posted 5 days ago on re/code
A scene from The Irishman. | Niko Tavernise/Netflix Who wants to watch a mob movie that runs three hours and 30 minutes? More than 26 million people, according to Netflix. The Irishman, the Martin Scorsese/Robert De Niro/Al Pacino/everyone else who’s ever been in a mob movie movie, is very long. It’s also very popular on Netflix, says Netflix. Netflix subscribers in more than 26 million homes around the world watched the movie in the first week it was available on the streaming service, Netflix announced today. The company thinks that number will hit 40 million within the first month of The Irishman’s digital debut. It’s a shiny statistic, and an important one for Netflix, which wants to prove it can be the home of big, popular movies from people you’ve heard of. That will be particularly important in the next few years, as a deluge of competitive streaming services launch their own attention-grabbing projects — and as Netflix continues working to convince Hollywood stars to bring their movies directly to the streaming service instead of showing them in theaters. *pours glass of wine**dips bread*My friends, I’ve got some news from the big guy at the top: THE IRISHMAN was watched by 26,404,081 accounts globally — within its first 7 days on Netflix. pic.twitter.com/abVV993CWS— Netflix Film (@NetflixFilm) December 10, 2019 These numbers come with caveats. For instance, Netflix measured the number of Netflix accounts that have watched the movie so far, not the total number of people. Assuming some viewers watched the movie with at least one other person at the same time, it’s likely that the 26-million and 40-million numbers are significantly lower than the movie’s total audience. On the flip side: When Netflix says someone “watched” something on the service, it means they watched at least 70 percent of the movie or TV show. In the case of The Irishman, it means they watched at least two hours and 27 minutes of the three-hour-and-30-minute movie. (Which means if they stopped watching at that point, they didn’t get to see [redacted]’s character [redacted] [redacted]’s character.) The most important caveat, though, is that we don’t really know how The Irishman’s audience compares to that of other stuff on Netflix, because Netflix doesn’t provide viewing data for most of its shows and movies, save for a handful of things they want to highlight. But we do know, for instance, that Bird Box, the Sandra Bullock thriller Netflix released a year ago, attracted 35 million views in its first week and 80 million views in its first month. We also know that Netflix thinks the numbers for The Irishman are worth bragging about. In addition to tweeting about it, Netflix also made sure that Ted Sarandos, the company’s chief content officer, highlighted the numbers at an investor conference today. The dual release indicates that Netflix is targeting at least three separate audiences when it announces numbers like these: Wall Street, which needs to decide whether the company’s stock is worth buying; Hollywood talent like Scorsese and De Niro, who need to decide whether they should work with Netflix over a different TV network, streaming service, or movie studio; and people like you and me, who may be more tempted to watch something if we know other people have watched it. If you’re the kind of person who gets frustrated by the selective release of streaming numbers — say, you work at a movie studio or TV network that competes with Netflix — you’re going to have to get used to it. 
Disney, which entered the streaming wars last month with the release of its Disney+ streaming service, said more than 10 million people had signed up for a subscription, but wouldn’t specify how many of those had paid versus how many had signed up for a free trial or got a free year because they’re a Verizon subscriber. Apple has yet to release any viewership or subscriber data for its TV+ streaming service, which also launched last month. Expect more minimal disclosure next year, too, when AT&T and Comcast launch their respective HBO Max and Peacock services.
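As an aside, Netflix’s “watched” threshold described above is easy to check. Here is a minimal sketch in Python using the runtime and percentage from the company’s announcement; the variable names are ours, not Netflix’s.

# Netflix counts a title as "watched" at 70 percent completion.
# The Irishman runs 3 hours 30 minutes, so the cutoff is 2 hours 27 minutes.
RUNTIME_MINUTES = 3 * 60 + 30     # 210 minutes total
WATCHED_SHARE = 0.70              # share of the runtime Netflix counts as a view

cutoff = RUNTIME_MINUTES * WATCHED_SHARE
hours, minutes = divmod(int(cutoff), 60)
print(f"Counted as watched after {hours}h {minutes}m of a {RUNTIME_MINUTES}-minute movie")
# Prints: Counted as watched after 2h 27m of a 210-minute movie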

Read More...
posted 5 days ago on re/code
Recode’s new multiplatform journalism project revealing the good, the bad, and the complicated of tech. Many of us have grown skeptical of tech and the multibillion-dollar companies behind it. We’re still using Google and Facebook and Amazon, but we’ve started to reconsider what we’re signing up for and what we’re giving away when we accept the terms of service for these platforms and use their products. And as this technology gets more and more embedded into our lives, it’s harder and harder to understand the real consequences when we choose between convenience and privacy, or when we consider the differences between the data we willingly share and the data we don’t know we’re giving away. That’s why Recode by Vox is launching Open Sourced, a multiplatform journalism project supported by the Omidyar Network that will expose and explain the hidden consequences of tech — the good, the bad, and the complicated. Because most of us don’t really understand either AI or digital privacy, they’re surrounded with hype and fear. Open Sourced is going to change that. The deeply personal nature of data, privacy, and algorithms is often what makes these systems so difficult to understand. One person’s experience can be radically different from another’s. And that means that to report on them well, we’ll need your help. The Open Sourced Reporting Network is an email community that will keep you up to date with the latest ways you can contribute to our reporting. (We promise to never spam you.) Please subscribe to join us on this Open Sourced journey, as we reveal tech’s hidden consequences together.

Read More...
posted 5 days ago on re/code
A worker at an Amazon warehouse south of Paris. Hundreds of workers at a Sacramento facility are demanding paid time off in a new petition. | Philippe Lopez/AFP/Getty Images Warehouse workers lifting hundreds of boxes a day say that they fear being fired for taking a day of unpaid time off. As e-commerce giant Amazon expands its network of warehouses to sort and prepare packages for “last mile” shipments in urban areas, workers around the US are pushing back against what they say are unfair working conditions. In July, delivery station workers in Chicago filed a labor complaint after they said Amazon cheated them out of overtime pay during Prime Week, when they worked extra shifts during a record heat wave. In October, workers in Minnesota at a delivery station walked out of work protesting the company’s strict time-off rules. Now a group of delivery station workers called Amazonians United Sacramento has made a public petition for paid time off, which now has nearly 400 signatures in support. The group recently circulated an internal petition signed by over 200 workers at the Sacramento warehouse, as first reported by BuzzFeed News. So far, Amazon’s site and regional management team has not met with organizers of the petition or made any changes to the PTO policy. The recent action is an escalation of labor tension at Amazon’s growing logistics network of last-mile delivery centers, which are smaller warehouses where workers prepare packages that are then sent out for delivery. Most workers at the Sacramento site — as with those at other delivery stations across the US — are prohibited from working more than 30 hours a week. They often work up to that maximum amount allotted; sometimes more than that during peak shopping times. Workers say their shifts are physically grueling, involving lifting hundreds of boxes weighing up to 50 pounds in a single day. But because they aren’t working full-time hours, most of these workers do not receive benefits such as employer-subsidized health insurance, and they can be fired for taking off more than 20 hours every quarter. Amazon publicly promises its part-time employees paid time off on its own website (notably, though, the policy differs for California employees), as well as in its employee handbook, according to documents Recode reviewed. When workers in Sacramento pointed this out to management, they say they were told the rules don’t apply to them since they are a specific subcategory of “class q” and “class m” logistics workers — a distinction they had previously never heard of and were given no explanation for. “The fact is that Amazon is a trillion dollar company run by the richest man in the world,” the Sacramento workers said in the public petition, “and they intentionally give all class q part-time workers less benefits than regular part-time workers so that they grow the company at our expense. We’ve had enough.” An Amazon spokesperson acknowledged that the company has received workers’ petition, but did not immediately respond to a question about how “class q” and “class m” workers are defined, and why these workers are not eligible for PTO like other part-time employees. The spokesperson said in a statement to Recode: “Amazon maintains an open-door policy that encourages employees to bring their comments, questions, and concerns directly to their management team for discussion and resolution. ... 
Benefits vary based on a variety of factors but if someone wanted to move to a role that offered regular, full-time benefits we expect to have more than a thousand of those roles in Sacramento throughout the year.” The new public letter also calls for Amazon’s local management team to meet with representatives from Amazonians United Sacramento, which has turned into the de facto organizing group for workers at the warehouse. Currently, the group says it operates as an independent worker organization not affiliated with any union. It’s also one of several worker-led groups that have organized in recent months at Amazon’s delivery stations, and it has had some recent successes in petitioning for better working conditions. In July, a group of workers in Chicago called Amazonians United DCH1 publicly came forward demanding rights for workers, including health care benefits and air conditioning on site. They had an early win when management agreed to send workers home during a sweltering heat wave that they say made it unsafe to work. Delivery station worker organizers in Eagan, Minnesota, walked out of work until their manager agreed to talk to his boss about demands related to time off. And Amazon workers in Sacramento successfully campaigned to get two colleagues rehired after they were fired for taking more unpaid time off than allowed, including one worker who says she was fired after taking off one more hour than permitted after her mother-in-law died. “Any time you want to take time off to spend with your family, you have to hope an emergency doesn’t come up so that you don’t go over your limit of unpaid time off,” said one Sacramento DSM1 worker, who added that one of the most common reasons for people getting fired at the warehouse is for taking too much unpaid time off, often to take care of their loved ones. While there have been reports of poor labor practices across Amazon’s supply chain — and incidents as severe as death on the warehouse floor — delivery station sites in particular have become a hotbed for worker activism. Unlike other larger fulfillment centers in suburban areas, these warehouses are largely in urban areas like Chicago, New York, Portland, and Sacramento. Workers are often the last ones moving around packages just before they’re put in trucks for final-mile delivery to a customer’s doorstep. These workers say that they’re arbitrarily classified as a subcategory of workers who don’t have the same benefits as their full-time colleagues at other facilities. “We’re doing back-breaking warehouse work,” said a delivery station worker in Chicago. “So whether it’s someone working 40 hours a week who gets injured or somebody working 24 hours a week, it doesn’t matter. We just wanted to be treated equally as all of Amazon’s other part-time workers.” As Amazon continues to grow an independent logistics network that will include more of these delivery stations rather than relying on contracted partners like FedEx, workers say they only anticipate a bigger fight ahead. Several workers said they were frustrated by Amazon’s statement that workers unhappy with their lack of benefits should acquire full-time jobs within the company, since many workers at delivery stations want to work more than 30 hours a week but are blocked by Amazon’s rules. Workers say that the nearest Amazon fulfillment centers with full-time jobs can be hours away. “This is what the economy is, part-time work is all we can get,” the Chicago worker said. 
“This is the reality of our economy, and we deserve paid time off.”

Read More...
posted 5 days ago on re/code
A video explainer on the technology that’s changing the meaning of the human face. Human faces evolved to be highly distinctive; it’s helpful to be able to recognize individual members of one’s social group and quickly identify strangers, and that hasn’t changed for hundreds of thousands of years. But, in just the past five years, the meaning of the human face has quietly but seismically shifted. That’s because researchers at Facebook, Google, and other institutions have nearly perfected techniques for automated facial recognition. This development rested on two major trends that enabled the recent explosion in machine learning: the exponential improvement in computing power and growth of digital imagery, including labeled photos of human faces. In most cases, those images weren’t created in order to train facial recognition algorithms, but they were borrowed for that purpose. The result of that research is that your face isn’t just a unique part of your body anymore, it’s biometric data that can be copied an infinite number of times and stored forever. Now that facial recognition algorithms exist, they can be effectively linked to any digital camera and any database of labeled faces to surveil any given population of people. In the video above, we explain how facial recognition technology works, where it came from, and what’s at stake. You can find this video and all of Vox’s videos on YouTube. And join the Open Sourced Reporting Network to help us report on the real consequences of data, privacy, algorithms, and AI. Open Sourced is made possible by the Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.

Read More...
posted 5 days ago on re/code
Christina Animashaun/Vox Cookie alerts are supposed to improve our privacy online. But are they? If you’ve visited a new website on your phone or computer over the past 18 months or so, you’ve probably seen it: a notification informing you that the page is using cookies to track you and asking you to agree to let it happen. The site invites you to read its “cookie policy” (which, let’s be honest, you’re not going to do), and it may tell you the tracking is to “enhance” your experience — even though it feels like it’s doing the opposite. Cookies are small files that websites send to your device that the sites then use to monitor you and remember certain information about you — like what’s in your shopping cart on an e-commerce site, or your login information. These pop-up cookie notices all over the internet are well-meaning and supposed to promote transparency about your online privacy. But in the end, they’re not doing much: Most of us just tediously click “yes” and move on. If you reject the cookie tracking, sometimes the website won’t work. But most of the time, you can just keep browsing. They’re not too different from the annoying pop-up ads we all ignore when we’re online. Jaap Arriens/NurPhoto via Getty Images Cookie alerts are supposed to give you more agency over your privacy. But chances are, you’re clicking yes and moving on. These cookie disclosures are also a symptom of one of the internet’s ongoing and fundamental failings when it comes to online privacy and who can access and resell users’ data, and by extension, who can use it to track them across the internet and in real life. The proliferation of such alerts was largely triggered by two different regulations in Europe: the General Data Protection Regulation (GDPR), a sweeping data privacy law enacted in the European Union in May 2018; and the ePrivacy Directive, which was first passed in 2002 and then updated in 2009. They, and the cookie alerts that resulted, have plenty of good intentions. But they’re ineffectual. “I would say they’re generally pretty useless so far,” Shane Green, CEO of private data sharing platform digi.me, told Recode. “We’re back to 1999 all over again with pop-ups everywhere, and it’s beyond annoying.”
Why this, why now, briefly explained
To back up a little bit, cookies are pieces of information saved about you when you’re online, and they track you as you browse. So say you go to a weather website and put in your zip code to look up what’s happening in your area; the next time you visit the same site, it will remember your zip code because of cookies. There are first-party cookies that are placed by the site you visit, and then there are third-party cookies, such as those placed by advertisers to see what you’re interested in and in turn serve you ads — even when you leave the original site you visited. (This is how ads follow you around the internet.) The rise of alerts about cookies is the result of a confluence of events, mainly out of the EU. But in the bigger picture, these alerts underscore an ongoing debate over digital privacy, including whether asking users to opt in or opt out of data collection is better, and the question of who should own data and be responsible for protecting it. In May 2018, the GDPR went into effect in Europe — you probably remember your inbox being flooded with privacy policy emails around that time. The privacy law is designed to make sure users are aware of the data that companies collect about them, and to give them a chance to consent to sharing it.
It requires companies to be transparent about what information they’re gathering and why. And individuals get the right to access all their personal data, control access and use of it, and even have it deleted. (Vox has a full explainer on the GDPR from 2018.) After the GDPR went into effect, a lot of websites started adding cookie notifications. But GDPR actually only mentions cookies once. It says that to the extent that they are used to identify users, they qualify as personal data and are subject to the GDPR, which lets companies process data as long as they get consent or have what regulators deem a “legitimate interest.” But it’s not just GDPR that governs cookies — it’s also the European ePrivacy Directive, which was last updated about a decade ago. The directive is sometimes known as the “cookie law” and lays out guidelines for tracking, confidentiality, and monitoring online. Currently, Europe is trying to enact the ePrivacy Regulation, which would supplant the directive and put in place across-the-board regulations for the EU instead of having them handled country by country. Right now, the GDPR and ePrivacy Directive share governance over cookie regulations. But whether the law passes or not, cookie alerts aren’t going away anytime soon. “The GDPR is one shoe, and the other shoe is this ePrivacy Regulation, which is on the way,” said Amy Brouillette, research director of New America’s Ranking Digital Rights project, which promotes free expression and privacy online.
Most companies are throwing cookie alerts at you because they figure it’s better to be safe than sorry
When the GDPR came into effect, companies all over the globe — not just in Europe — scrambled to comply and started to enact privacy changes for all of their users everywhere. That included the cookie pop-ups. “Everybody just decided to be better safe than sorry and throw up a banner — with everybody acknowledging it doesn’t accomplish a whole lot,” said Joseph Jerome, former policy counsel for the Privacy & Data Project at the Center for Democracy & Technology, a privacy-focused nonprofit. Jaap Arriens/NurPhoto via Getty Images Cookie pop-ups worsen user experience without doing anything really productive in return. It’s certainly a good thing that tech companies and website owners are being more transparent with users about what they’re doing with their data and how they’re tracking them. And the GDPR and the heavy fines it threatens have caused some companies to clean up their practices around issues such as breach notifications. After GDPR, there has been “less egregious sharing and abusing of data across the board and in Europe,” Green said. But when it comes to cookies, these pop-up notifications aren’t doing much. The internet and its biggest websites are constructed in a way that gives these sites easy access to users’ data, and they can essentially do whatever they want with it. And, frankly, we’re abetting this behavior. Most users just click or tap “okay” to clear the pop-up and get where they’re going. They rarely opt to learn more about what they’re agreeing to. Research shows that the vast majority of internet users don’t read terms of service or privacy policies — so they’re probably not reading cookie policies, either. They’re many pages long, and they’re not written in language that’s simple enough for the average person to understand. There’s not even a consensus on whether or not cookie alerts are compliant with European law.
In May, the Dutch data protection agency said these disclosures do not actually comply with GDPR because they’re basically a price of entry to a website. “Until there’s an enforcement action or a regulator puts out an actual guidance document and says, ‘Here’s what we want and what we think people will read,’ you’ll have this gross user experience,” Brouillette told Recode. Are there better solutions? Maybe, but no one can agree on what they are. On the one hand, users should know what they’re getting into and what companies are tracking about them when they go to a website. On the other hand, asking them to check a box when they have very little idea what they’re agreeing to — and not giving them any other viable options — doesn’t seem to be an ideal solution. It worsens the user experience without doing anything very productive in return. This, again, reflects a more fundamental shortcoming when it comes to privacy and data collection on the internet. So what would be a better answer? Green suggested perhaps some seal of approval or ratings system that would signal to users that a website follows good privacy practices. Of course, we would have to decide who sets those standards — the public sector, the private sector, or some combination — and what the standards should be. And it’s going to be tough to find a consensus. Jerome pointed to the transparency and consent framework put forth by the Interactive Advertising Bureau, or IAB, an industry trade group that researches interactive advertising and develops standards and best practices for complying with EU rules. “That’s not necessarily the solution … but we do need some sort of standardization here,” he said. Johnny Ryan, chief policy and industry relations officer at Brave, a privacy-oriented web browser, said he thinks the IAB’s framework is actually harmful. “You’re essentially cutting corners on what they show you when they ask for your okay, and in many cases, on top of that, they’re not letting you say no,” he said. Ryan said he believes the GDPR has resulted in a “game of chicken” between the tech industry and regulators, where companies are trying to see what they can get away with and doing the bare minimum — without taking meaningful action or, often, actually complying with the law. “The GDPR is very good as a piece of paper; it’s almost perfect. But it hasn’t been enforced,” he said. Beyond what’s happening in Europe, there is also an online privacy movement in the US and some potential legislation that could someday change the way data collection works online, including when it comes to cookies. For example, Rep. Ro Khanna (D-CA) has proposed an Internet Bill of Rights, a list of user protections in the digital age, and Senate Democrats have introduced the Consumer Online Privacy Rights Act (COPRA), which seeks to expand digital privacy rights and protections in a way that is similar to GDPR. With Republicans in control of the Senate and few things moving through Congress, it’s not clear when or if either of these ideas would become law. But at the state level, the California Consumer Privacy Act (CCPA), a law meant to protect privacy rights and improve consumer data protection, will go into effect on January 1 in the state. But, for now, we’re stuck with these cookie pop-ups that make online browsing more difficult without accomplishing much else. Could we click through to see what’s being tracked about us? Sure. And might some websites still work if we say no to the cookies? Perhaps. 
But most of us are just going to keep saying yes. “We’re going to be bedeviled by banners for a long time,” Jerome said. Open Sourced is made possible by the Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.
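To make the cookie mechanics described above concrete, here is a minimal sketch of a first-party cookie modeled on the weather-site example: the server remembers a visitor’s zip code between requests by sending a Set-Cookie header. It uses only Python’s standard library; the handler name, cookie name, and port are illustrative choices, not any real site’s implementation.

# Minimal sketch of a first-party cookie; names here are illustrative only.
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.cookies import SimpleCookie
from urllib.parse import urlparse, parse_qs

class WeatherHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Read any cookie the browser sent back from a previous visit.
        cookies = SimpleCookie(self.headers.get("Cookie", ""))
        saved_zip = cookies["zip"].value if "zip" in cookies else None

        # Let the visitor set a zip code via a query string like ?zip=94301
        query = parse_qs(urlparse(self.path).query)
        new_zip = query.get("zip", [saved_zip])[0]

        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        if new_zip:
            # The Set-Cookie header is what makes the site "remember" you.
            self.send_header("Set-Cookie", f"zip={new_zip}; Max-Age=2592000; Path=/")
        self.end_headers()

        message = f"Weather for {new_zip}" if new_zip else "No zip code saved yet"
        self.wfile.write(message.encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), WeatherHandler).serve_forever()

Third-party cookies work the same way at the HTTP level; the difference is that the Set-Cookie header comes from a domain other than the one in your address bar, typically an ad server whose content is embedded in the page, which is what lets ads follow you from site to site.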

Read More...
posted 5 days ago on re/code
Zac Freeland/Vox This map shows how widespread the use of facial recognition technology has become. If you live in the US, there’s a good chance that at some point you’ve been watched, scanned, or analyzed by facial recognition technology — potentially without even realizing it. Across the country, government use of the technology — which identifies people by matching unique characteristics of their facial patterns to databases of images — is on the rise. Critics say it poses a serious threat to Americans’ privacy by enabling rapid and unwarranted monitoring of citizens. But the extent of facial recognition has been, until recently, relatively hidden from the public. That’s why researchers are increasingly trying to quantify just how widely it’s being used. A map from the digital rights advocacy group Fight for the Future visualizes just how often US law enforcement agencies use this software to scan through millions of Americans’ photos — often without their knowledge or consent. While the map isn’t exhaustive, it is one of the most comprehensive guides showing how common facial recognition use is becoming. The map compiles existing data from the Center on Privacy and Technology at Georgetown Law, news reports, press releases, and other sources. Some examples: In several states, including Texas, Florida, and Illinois, the FBI is allowed to use facial recognition technology to scan through DMV databases of drivers’ license photos. In many US airports, Customs and Border Protection now uses facial recognition to screen passengers on international flights. And in cities such as Baltimore, police have used facial recognition software to identify and arrest individuals at protests. Fight for the Future A screenshot of Fight for the Future’s map detailing instances of facial recognition technology use and challenges across the US. Fight for the Future’s map shows where and how the government is applying facial recognition, as well as legislative challenges to the tech’s rollout. It’s part of the organization’s campaign for a nationwide ban on facial recognition technology — something lawmakers across the US have begun to explore. While Congress hasn’t introduced any legislation for an all-out ban, lawmakers on both sides of the aisle have raised concerns about the technology in recent oversight hearings. In the past few months, three cities — San Francisco; Somerville, Massachusetts; and now Oakland, California — have all banned local law enforcement from using facial recognition technologies. “I think there’s something really visceral about the idea of having your face scanned, and having a cold, emotionless piece of software make decisions that have a profound impact on people’s lives,” Evan Greer, deputy director of Fight for the Future, told Recode. Bill O’Leary/The Washington Post via Getty Images Station Manager Chad Shane of SAS airlines ushers a boarding passenger through the process as Dulles airport officials unveil new biometric facial recognition scanners in Dulles, Virginia, on September 6, 2018. Many police departments are eager to use facial recognition tools, saying they can help them more efficiently identify and arrest criminals. In a high-profile win for the technology last year, Maryland police used facial recognition technology to help correctly identify the suspect in the deadly Capital Gazette newspaper shooting. The Department of Homeland Security has also maintained that the technology can help the government more quickly screen travelers and process immigration.
But researchers, privacy activists, and many elected officials warn about its risks. Critics worry that pervasive use of facial recognition technology could have a chilling effect on free speech if people feel they’re constantly being watched. They point to China, where the government uses the software to track — and imprison — the country’s Uighur religious minority. They also point out that facial recognition technology has been proven to perpetuate existing biases against women and minorities. There’s still a lot of secrecy around how and where exactly facial recognition technology is used. Elected officials have criticized how ICE and the FBI scan state drivers’ license photo databases with facial recognition tech without citizens’ consent, turning “state departments of motor vehicles databases into the bedrock of an unprecedented surveillance infrastructure,” as the Washington Post first reported. Fight for the Future’s map also details more than 600 local police partnerships with Ring, Amazon’s video surveillance doorbell that includes a social media component. Police departments have said that Ring’s technology helps them crack down on package theft — which is on the rise — as well as other local crime. In the past several years, Ring has become one of the most tangible examples of the threats of the private and public sectors teaming up to create an extensive network of high-tech surveillance technology. Even though the company says it’s not using facial recognition technology now, outside reports reveal that it appears to have plans to do so. In 2018, it was revealed that the company filed patents for facial recognition tech that could identify “suspicious” people and then alert police. And in November, the Intercept reported on internal company documents planning a facial recognition-backed neighborhood “watch list,” along with other proactive suspect-detection features. Its parent company Amazon conceded that it’s been a “contemplated,” but unreleased, feature after a back-and-forth with Congress over its efforts. The company has told Recode “Ring does not use facial recognition technology,” and that it does not collaborate with Amazon’s controversial facial recognition tech, Rekognition. As to whether that means Ring will use facial recognition in the future, a Ring spokesperson told Recode in October, “[A] patent filing application does not necessarily indicate current development of products and services, and like many companies, Ring files a number of forward-looking patent applications that explore the full possibilities of new technology. Ring takes the privacy and security of its customers’ extremely seriously and privacy and security will always be paramount when Ring considers applying any patents to its business or technology.” When Recode asked Ring again in December about future plans, a company spokesperson declined to answer. So far, promises from Ring haven’t stopped politicians, civil liberties leaders, and concerned community members from fearing for the future of video-enabled surveillance technology. It’s frightening to many that Ring is already deeply embedded with many local police departments, with no comprehensive regulation around it. “Connected doorbells are well on their way to becoming a mainstay of American households, and the lack of privacy and civil rights protections for innocent residents is nothing short of chilling,” said Sen. Ed Markey (D-MA), who has launched an investigation into the company’s efforts, in a recent statement.
As the network of government uses of facial recognition technology grows, so will the scrutiny. Open Sourced is made possible by the Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.

Read More...
posted 5 days ago on re/code
Christina Animashaun/Vox Help us report on the real consequences of data, privacy, algorithms, and AI. Vox and Recode take a wide look at how technology is changing — and changing us in the process. Open Sourced will zoom in even further. The new frontiers of data, privacy, algorithms, and artificial intelligence are closed-box ecosystems — often scary, mystifying, or simply impenetrable — built by people who speak and code in a very different language. Our year-long reporting effort will illuminate what these systems are, how they’re built, why they matter, and their potential risks and benefits. Due to the nature of these systems, one person’s experience can be radically different from another’s, which means that we’ll need your help to truly explain them well. We might ask you to send us a screenshot of the targeted ads you see on Instagram, or a list of the topics Twitter has identified as being of interest to you. We might want to know if you’ve been seeing a slew of ads from 2020 candidates on your phone, or if LinkedIn seems like it’s trying to tell you something. Join the Open Sourced Reporting Network for updates on how to get involved with our reporting and be the first to know what we discover throughout the year, along with insights from our reporters on the process. We’re kicking off the reporting network by gathering your questions and concerns about your personal privacy and the technologies you use every day. We want to hear your questions about things like: How does the parking garage track your car’s parking spot? Where is Facebook storing the data from its facial recognition technology? How does Google Maps seem to always know where you’re going? In line with the mission of this project to reveal the hidden aspects of tech, we’ll be extra transparent with our own crowdsourcing by explaining where your responses go and what we’re doing with the information. The language in the Google Form below has been reviewed by our legal team and follows Vox Media’s own privacy policy. Your responses are fed into a Google spreadsheet connected to the form. We send that spreadsheet over to the Vox Media IT team, which adds further privacy protections. “We monitor for changes to access and [for] activities such as downloading and sharing,” says Isaac Teklehaimanot, Vox Media’s director of information security. “If we notice activities that seem anomalous or sharing with individuals outside the scope defined by the stakeholders, we notify the owner.” Another aspect of privacy protections that Isaac and the IT team helped us think about is the European Union’s General Data Protection Regulation, or GDPR. It’s a 2018 privacy law designed to make sure users know about (and understand) the data that companies collect about them, and that they consent to sharing it. Basically, if you’re from the EU and interested in filling out our form, you have to opt in to give us your information and “expressly agree to Vox Media’s GDPR terms for all your existing agreements with Vox Media,” according to our privacy policy. You can find the full text of that policy and more details on your individual rights here. The spreadsheet remains private, with access granted only to a handful of individuals within our company who need to see your responses: reporters, editors, IT support, and your friendly community manager (that’s me). 
Google, of course, has its own terms of service for Google Drive, which include policies like “We will not change a private document into a public one” and “You can take your data with you if you choose to stop using Google Drive.” Google dives further into your privacy controls and what information it collects in its extensive privacy policy. We will use a third-party email platform called Campaign Monitor. The first name, last name, and email address you provide in the form will be uploaded into that system so that we can send you emails. You can read Campaign Monitor’s privacy policy here. Now that you know why we need your help — and what we’re doing with the info you share with us — please join our community of people helping to reveal the hidden consequences of tech. Open Sourced is made possible by the Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.

Read More...
posted 5 days ago on re/code
Christina Animashaun/Vox Recode’s new multiplatform journalism project explains and exposes the hidden consequences of tech — the good, the bad, and the complicated. A reckoning has come for tech — and for the rest of us, too. Not so long ago, tech inspired optimism. It was revitalizing the economy, connecting people around the world, making our lives more convenient, innovating health care, and even helping to spread democracy. Tech companies like Google, Facebook, and Amazon weren’t quite as big as they are in 2019, and most people seemed to think their rapid growth was a good thing. They were changing the world and they weren’t being evil as they did it, or at least that’s what their corporate slogans promised. But in recent years, there’s been a shift: Many of us have grown skeptical of tech and the multibillion-dollar companies behind it. We’re still using Google and Facebook and Amazon, but we’ve started to reconsider what we’re signing up for and what we’re giving away when we accept the terms of service for these platforms and use their products. And as this technology gets more and more embedded into our lives, it’s harder and harder to understand the real consequences when we choose between convenience and privacy, or when we consider the differences between the data we willingly share and the data we don’t know we’re giving away. That’s why Recode by Vox is launching Open Sourced, a multiplatform journalism project supported by the Omidyar Network that will expose and explain the hidden consequences of tech — the good, the bad, and the complicated. We’ll do this with written stories and explainer videos demystifying aspects of technology that are the most controversial and the least understood: artificial intelligence and personal data and privacy — and we will need your help to do it. (More on that below.) So much of what happens to our data happens inside a black box; we don’t control it and we don’t know what exactly is being collected, who has access to it, and what it’s being used for. And few of us truly understand the artificially intelligent technology that’s being introduced to our lives, from the Alexa smart assistants that are listening in our homes to the systems that are screening our job applications, surveilling our faces, and trying to influence our political discourse — and even our votes. Because most of us don’t really understand either AI or digital privacy, they’re surrounded with hype and fear. Open Sourced is going to change that. For starters, we’re making a pledge of transparency — decoding our own privacy policy and putting it in plain English. We’ll explain what cookies — those little bits of sticky data that follow you around the internet — really are. In both video and text, we’ll look at the new frontier of facial recognition and explore how surveillance is changing the way we live. We’ll dive into how AI will be used to filter your next job application, and whether it helps level the playing field or raises new barriers. We’ll look at how ad microtargeting works on social media platforms and how it could influence your vote in the 2020 elections. And that’s just for starters. Almost everything in life involves trade-offs. It’s no different with technology. AI has the potential to make our lives more efficient, more convenient, even healthier — but concerns abound over how biases coded into the algorithms powering this tech could make life harder for the most vulnerable people in our society.
Tech platforms that store your thousands of photos, send your emails, and seamlessly connect you with loved ones around the world may not cost a cent to use, but they aren’t free: You’re paying them with your intimate data and sacrificing your privacy. This can all get pretty confusing. Frustrating, even. Having a reflexive reaction to this new tech frontier has become all too common: Some reject the exciting possibilities tech offers us; others blame Facebook, Google, and Amazon for society’s failings; still others resign themselves to living in a post-privacy world where robots will eventually take our jobs and police us. And many of us just assume the introduction of profoundly life-changing technology is still a long way off. Open Sourced will offer another option: explaining the risks and benefits when it comes to AI and digital privacy so you can make informed decisions. Better understanding can empower us to demand more of tech companies and of our political representatives in regulating these online behemoths, which the law hasn’t caught up with yet. And as for that life-changing technology, we think it’s already here — it’s just sometimes hard to see. That brings this all back to a bigger point that our colleagues Kara Swisher and Ezra Klein wrote about earlier this year: Every story has become a tech story. Technology might seem impersonal and impenetrable, which can make its consequences seem distant and theoretical. But what’s physically closer to you, day in and day out, than your phone? What would someone find out about you if they could sift through your email and messaging inboxes, browse all your Amazon orders, or read your Google search history (including the stuff you looked up while you were in incognito mode)? Whether you’re talking about politics, business, or culture, it’s all connected to tech. And it’s all deeply personal. Open Sourced will illuminate these connections.
Join the Open Sourced Reporting Network
The deeply personal nature of data, privacy, and algorithms is often what makes these systems so difficult to understand. One person’s experience can be radically different from another’s. And that means that to report on them well, we’ll need your help. The Open Sourced Reporting Network is an email community that will keep you up to date with the latest ways you can contribute to our reporting. (We promise to never spam you.) The tasks we’ll need help with will change as our reporting and our stories evolve. But we’re starting at ground zero: What are your biggest questions about the technologies you use every day? We want to hear your story about how Google Maps or Uber already seemed to know where you wanted to go, or how that Spotify playlist suggestion felt oddly dead-on. We want to know if you’ve been seeing particular online ads all of a sudden, or if LinkedIn seems like it’s trying to tell you something. We can’t promise all the answers, but we’ll ask the right questions and report what we find out. That’s our promise to you. Please subscribe to join us on this Open Sourced journey, as we reveal tech’s hidden consequences together. Open Sourced is made possible by the Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.

Read More...
posted 5 days ago on re/code
Christina Animashaun/Vox The future of police surveillance doesn’t have to be scary. But government and citizens need to step up. Everybody’s afraid of facial recognition tech. Civil liberties activists warn that the powerful technology, which identifies people by matching a picture or video of a person’s face to databases of photos, can be used to passively spy on people without any reasonable suspicion or their consent. Many of these leaders don’t just want to regulate facial recognition tech — they want to ban or pause its use completely. Republican and Democratic lawmakers, who so rarely agree on anything, have recently joined forces to attempt to limit law enforcement agencies’ ability to surveil Americans with this technology, citing concerns that the unchecked use of facial recognition could lead to the creation of an Orwellian surveillance state. Several cities, such as San Francisco, Oakland, and Somerville, Massachusetts, have banned police use of the technology in the past year. A new federal bill was introduced earlier this month that would severely restrict its use by federal law enforcement, requiring a court order to track people for longer than three days. And some senators have discussed a far-reaching bill that would completely halt government use of the technology. But the reality is that this technology already exists — it’s used to unlock people’s iPhones, scan flight passengers’ faces instead of their tickets, screen people attending Taylor Swift concerts, and monitor crowds at events like Brazil’s famous Carnival festival in Rio de Janeiro. Its prevalence has created a delicate situation: proponents of the tech, such as law enforcement and technology manufacturers, downplay facial recognition’s power. They play up its potential to crack open cold criminal cases or reunite missing children with their families. Stefan Rousseau/PA Images via Getty Images A man uses the facial recognition feature on an iPhone X in a London Apple Store in 2017. Meanwhile, opponents warn of how quickly the powerful tech’s use could spiral out of control. As an example, they point to China, where the technology is regularly used to surveil and oppress an ethnic minority. The solution may be somewhere in between — there are cases where using this tech can do good, especially if it’s carefully regulated and the communities impacted by it are in control of how it’s used. But right now, that looks like an ideal scenario that we’re still far from achieving. “What we really need to do as a society is sort through what are the beneficial uses of this technology and what are the accompanying harms — and see if there are any roles for its use right now,” Barry Friedman, faculty director of NYU Law’s Policing Project, a research institute that studies policing practices, told Recode. Rolling out government use of facial recognition the right way, tech policy leaders and civil liberties advocates say, will involve a sweeping set of regulations that democratize input on how these technologies are used. Here are some of the leading ways that the US government is using facial recognition today, and where experts say there’s a need for more transparency, and for it to be more strongly regulated.
Everyday police use
The most famous examples of law enforcement’s use of facial recognition in the US are the extreme ones — such as when police in Maryland used it to identify the suspected shooter at the Capital Gazette newspaper offices.
But the reality is, as many as one in four police departments across the US can access facial recognition according to the Center on Privacy and Technology at Georgetown Law. And at least for now, it’s often in more routine criminal investigations. “We haven’t solved a murder because of this — but there’s been lots of little things,” said Daniel DiPietro, a public information officer at the Washington County, Oregon police department. Washington County was one of the first police departments in the country to use Amazon’s facial recognition product, called Rekognition, in its regular operations, beginning in 2017. DiPietro referenced a case where the police department used a screenshot from security video footage to search for someone who was accused of stealing from a local hardware store. Last year, the county says it ran around 1,000 searches using the tool — which it says it only uses in cases where there is reasonable suspicion that someone has committed a crime. The department doesn’t measure how many of those searches led to a correct or incorrect match, according to DiPietro. David McNew/AFP/Getty Images A live demonstration using artificial intelligence and facial recognition in the Horizon Robotics exhibit at the Las Vegas Convention Center in January 2019. Here’s how it works in Washington County: If officers have a photo, oftentimes from security camera footage, of someone who has committed a crime, they can run it against the jail booking database, and turn up potential matches in a matter of seconds. Before, the department says this process used to take days, weeks, or longer — as police would search manually through a database of 300,000 booking pictures, rack the brains of hundreds of colleagues, or send media notices to try to identify suspects. DiPietro told Recode that officers only use the tools when there’s probable cause that someone has committed a crime, and only matches it to jail booking photos, not DMV databases. (This sets Washington County apart — several other police departments in the US do use DMV databases for facial recognition searches.) He also said the department doesn’t use Rekognition to police large crowds, which police in Orlando, Florida, tried to do — and failed to do effectively, after running into technical difficulties and sustained public criticism. The Washington County police department made these regulations at will, in part it says because of conversations it had with members of the community. Their rules are a step toward transparency for the department, but exist in a broader piecemeal and self-mandated landscape of rules and regulations. And like with most other police departments who use facial recognition, critics say there’s often little oversight to make sure that officers are using the tool correctly. A report from Gizmodo last January suggested that Washington County police were using the tool differently than how Amazon recommended and had lowered the confidence threshold for a match to below 99 percent. In the absence of facial recognition regulation, it’s easy to see the potential for overreach. In an interview with tech media company SiliconANGLE from 2017, Chris Adzima, a senior information systems analyst for the department, spoke about how video footage can enhance the tool’s capabilities — even though the department currently says it has no plans to use video in its surveillance, for now. 
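To make the search-and-threshold workflow described above more concrete, here is a rough sketch of how a probe photo can be compared against a database of booking photos once every face has been reduced to a numeric embedding by a recognition model. It is an illustrative outline in Python, not Rekognition’s actual API or Washington County’s system; the function names, the 128-dimension embeddings, and the example threshold are our own assumptions.

# Illustrative sketch of threshold-based face matching, not any vendor's API.
# Assumes each face photo has already been reduced to a numeric "embedding"
# (a vector) by a recognition model; similar faces have similar vectors.
import numpy as np

def cosine_similarity(a, b):
    # Similarity of two face embeddings; 1.0 means identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_booking_photos(probe, database, threshold=0.99):
    # Return (photo_id, score) pairs whose similarity clears the threshold,
    # best match first. Lowering the threshold (say, to 0.80) surfaces more
    # candidates but also more false matches, the trade-off behind the
    # confidence-threshold issue raised in the Gizmodo report above.
    scored = [(photo_id, cosine_similarity(probe, emb))
              for photo_id, emb in database.items()]
    matches = [pair for pair in scored if pair[1] >= threshold]
    return sorted(matches, key=lambda pair: pair[1], reverse=True)

# Toy example: 300 made-up 128-dimensional embeddings standing in for a
# booking-photo database, and a probe that is a noisy copy of one entry.
rng = np.random.default_rng(0)
booking_db = {f"booking_{i}": rng.normal(size=128) for i in range(300)}
probe = booking_db["booking_42"] + rng.normal(scale=0.01, size=128)

print(search_booking_photos(probe, booking_db, threshold=0.99))
# Expected output: a single candidate, ("booking_42", ~1.0)

Real systems add face detection, the embedding model itself, and human review on top of a search like this; the threshold is simply the setting at issue in the report mentioned above.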
Washington County is just one of hundreds of law enforcement agencies at the local, state, and federal level that use facial recognition. And because it uses Rekognition — a product made by Amazon, perhaps the biggest and most scrutinized tech giant — police there have been more public about its use than other law enforcement agencies that use similar, but lesser-known, tools. Some law enforcement agencies are simply worried that sharing more information about the use of facial recognition will spark backlash, Daniel Castro, vice president of the DC-based tech policy think tank Information Technology and Innovation Foundation (ITIF), told Recode. “I’ve heard from at least one law enforcement agency saying ‘we’re doing some of this work but it’s so contentious that it’s difficult for us to be transparent, because the more transparent we are, the more questions are raised,’” he said. Much of the fear about facial recognition technology stems from the fact that the public knows little about how it’s used, or whether it’s been effective in reducing crime. In the absence of any kind of systemic federal regulation or permitting process, the little we do know comes from stories, interviews, public reports, and investigative reports about its prevalence. And even for police departments that are forthright about how they use the technology, like Washington County, they often don’t collect or share any tangible metrics about its effectiveness. “Too often we are relying on anecdotes without knowing how many times it isn’t successful — what’s missing from this debate is any kind of empirical rigor,” Friedman told Recode. Friedman said that with better data, the public might have a better understanding of the true value of facial recognition technology, and if it’s worth the risks.
The bias problem
For racial minorities and women, facial recognition systems have proven disproportionately less accurate. In a widely cited 2018 study, MIT Media Lab researcher Joy Buolamwini found that three leading facial recognition tools — from Microsoft, IBM, and Chinese firm Megvii — were incorrect as much as a third of the time in identifying the gender of darker-skinned women, compared with only a 1 percent error rate for white males. Amazon’s Rekognition tool in particular has been criticized for displaying bias after the ACLU ran a test on the software that misidentified 28 members of Congress as criminals, disproportionately providing false matches for black and Latino lawmakers. Amazon has said that the correct settings weren’t used in the ACLU’s test because the organization set the acceptable confidence threshold to 80 percent — although it was later reported that this is the default setting in the software, and one that some police departments seem to be using in training materials. Steven Senne/AP Massachusetts Institute of Technology facial recognition researcher Joy Buolamwini holds a white mask she had to use so that software could detect her face. Presumably, bias issues in facial recognition will improve over time, as the technology learns and data sets improve. Meanwhile, proponents argue that while facial recognition technology in its current state isn’t completely bias-free, neither are human beings. “[People] want to compare what we’re doing with some perfect status quo, which doesn’t exist,” said Eddie Reyes, the director of public safety communications for 911 in Prince William County, Virginia, who spoke at a recent ITIF panel.
“Human beings can be biased, human beings make mistakes, human beings get tired … facial recognition can do things much better.” But that’s not necessarily true, critics argue: When human beings with innate, even unconscious, biases build algorithms and feed those algorithms data sets, they amplify their existing biases in the tech they build. And facial recognition can be harder to hold accountable than a human being when it makes a mistake. “If an individual officer is discriminating against a person, there’s a through line or a causal effect you can see there, and try to mitigate or address that harm,” said Rashida Richardson, director of policy research at the AI Now Institute. “But if it’s a machine learning system, then who’s responsible?” The technology that determines a match in facial recognition is essentially a black box — the average person doesn’t know how it works, and often the untrained law enforcers using it don’t either. So unwinding the biases built into this tech is no easy task.
Just trust us
Another hurdle facial recognition tech will have to clear: Convincing communities they can trust their police departments to wield the powerful tool responsibly. Part of the challenge is that in many cases, public trust in police officers is divided, especially along racial lines. “It’s easy to say yes, ‘we should trust police departments,’” said Richardson, “but I don’t know of any other circumstances in government or private sector where ‘just trust us’ is a fair model. If an investor would say, ‘Just trust me with your money, trust me’ — no one would think that’s reasonable, but for some reason under law enforcement conditions it is.” Elaine Thompson/AP Demonstrators hold images of Amazon CEO Jeff Bezos during a Halloween-themed protest at Amazon headquarters in Seattle over the company’s facial recognition system, “Rekognition,” in 2018. Some tech companies, such as Microsoft and IBM, have called for government regulation on the technology. Amazon said earlier this year that it’s writing its own set of rules for facial recognition that it hopes federal lawmakers will adopt. But that raises the question: Should people trust companies any more than police to self-regulate this tech? Other groups such as the ACLU have created a model for local communities to exert oversight and control over police use of surveillance technology, including facial recognition. The Community Control Over Police Surveillance laws, which the ACLU developed as a template for local regulation, empower city councils to decide what surveillance technologies are used in their area and mandate community input. More than a dozen cities and local jurisdictions have passed such laws, and the ACLU says efforts are underway in several others. Overall, there may be benefits to law enforcement’s use of facial recognition technology — but so far, Americans are relying on police department anecdotes with few data points and little accountability. As long as police departments continue to use facial recognition in this information vacuum, the backlash against the technology will likely grow stronger, no matter the potential upside. Passing robust federal-level legislation regulating the tech, working to eradicate the biases around it, and giving the public more insight into how it functions would be a good first step toward a future in which this tech inspires less fear and controversy. Open Sourced is made possible by the Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.

Read More...
posted 5 days ago on re/code
Facebook CEO Mark Zuckerberg testifies before the House Energy and Commerce Committee in 2018. | Yasin Ozturk/Anadolu Agency/Getty Images “I think Facebook is the most afraid”: an interview with former Facebook security executive Alex Stamos. Facebook started out as Mark Zuckerberg’s dorm room project. Now it’s a company that boasts more than 2 billion users and tens of billions of dollars in advertising revenue. Those two data points are tightly linked, and together they create Facebook’s world-shaping power: Facebook users provide the company — knowingly or not — with an enormous amount of data about themselves. And Facebook uses that data to let advertisers reach those users with astonishing precision and effectiveness. For years, political advertisers ignored Facebook (along with most of the internet), but that’s not the case anymore: Digital political ad spending may approach $3 billion in 2020 — about a third of the money politicians and their campaigns will spend in 2020. Those ads are now a problem, not a positive, for Facebook. Critics are furious over the company’s policy that allows politicians and their campaigns to lie in Facebook ads, even as Zuckerberg and other top executives say they’re trying to promote free expression. Ben Stansall/AFP/Getty Images A photo illustration shows the Facebook page of Britain’s Conservative Party Leader and Prime Minister Boris Johnson on December 6, 2019. As Facebook defends its ad policy, Twitter has announced that it will ban all political ads, and Google has announced that it will limit targeting for its political ads. Facebook seems very unlikely to ban political ads altogether, but it has signaled that it may make other changes that might make its advertising machine less effective, in some ways, when it comes to politics. If all of this sounds confusing to you, you’re not alone. That’s why we’ve asked former Facebook executive Alex Stamos to help out. Stamos spent three years as Facebook’s chief security officer, helping the company fend off all kinds of attacks. But after leaving Facebook in 2018, Stamos has become a sort of Facebook attacker, offering a steady stream of criticism of the company and its leadership — for instance, he would like CEO Mark Zuckerberg to step down. Stamos has also been a loud and consistent voice calling on Facebook, and other digital advertising companies, to make a very specific reform: He wants them to place limits on the microtargeting, or hypertargeting, they offer political ad buyers. We talked to Stamos, now the director of the Stanford Internet Observatory, about his problem with ad targeting, why he thinks Facebook’s Cambridge Analytica breach was overblown, and what you, a person who cares about security but isn’t a security expert, can do to protect yourself. The following transcript of our conversation has been edited for length and clarity.
Peter Kafka: Can you explain what micro- or hypertargeting is, and then why you’d like to see limits on it?
Alex Stamos: When I think of microtargeting, I’m mostly thinking of what are called custom audiences by Facebook and a variety of other names by Google and Twitter. But the same product is available from a number of companies, which is an advertising product where you upload data to the advertising companies, and then they match up that data with their customer base. The canonical example of this is a car dealer. You go look at a Toyota truck, the dealer gets your email address.
You don’t buy it, and so they end up uploading [your email address] to Facebook and Google, saying, “I want to show Peter my ad.” And everywhere you go online, you now see ads for that truck that you looked at and didn’t buy. Peter Kafka They are showing this to me, or someone who looks and acts a lot like me? Alex Stamos Literally to you. It’s usually by phone number or email address. From my perspective, the most dangerous component is the data upload, for two reasons. One, it motivates political actors to build humongous databases on users. From my perspective, the real Cambridge Analytica scandal is not data being stolen from Facebook. That’s an issue, but there’s been a bunch of data breaches that have been much more important that people will never talk about. The part of Cambridge Analytica that is truly concerning is the ability to use data that you get through a variety of means to microtarget ads. There are effectively dozens of Cambridge Analytica[s] that still exist. They’re just not dumb enough to steal data from Facebook. They just buy it from Acxiom, and they apparently buy it from the California DMV. There are a ton of data brokers that know a bunch about people. So one, we probably don’t want to create these companies whose entire job is to figure out how to manipulate people. But the second is it allows political actors — that’s campaigns, PACs, parties — to have messages that are extremely finely targeted to a very small number of people. Therefore they can be somebody different to everybody: they can look one way to 100 people in northern Michigan, look different to 200 women in Manhattan, and look different again to 100 African American voters in Atlanta. We don’t want our politicians to be different people to everybody. It also makes it much harder to call them out on lies. If you allow people to show an ad to just 100 folks, and then you run tens of thousands of ads, it makes it extremely difficult for your political opponent and the print media to call you out. Peter Kafka Are you opposed to that kind of advertising, period? Is that acceptable if it’s for Toyota or Nike, as opposed to Donald Trump? Alex Stamos When it comes to other advertising, I think the way to handle it is through comprehensive privacy legislation. Because there’s really a supply of data that is gathered up and used to microtarget people. And then there’s the demand of the advertising. Peter Kafka One of the arguments you hear in favor of ad targeting is that we’ve had targeting forever. There’s direct mail, and this is another version of it. Yes, computers are now involved. And yes, it’s happening at scale. But if you thought it was okay to mail me something to my house, why can’t you target me on my laptop? Alex Stamos I think the direct mail argument is why we shouldn’t be making moral arguments against ad targeting. I think the practical issue is that online ad targeting is so much cheaper, and so it’s just not cost-effective to generate 10,000 different direct mail advertisements, and then see how it impacts people. Win McNamee/Getty Images Alex Stamos, then chief information security officer at Yahoo! Inc., testifies before the Senate Homeland Security Committee on May 15, 2014. Peter Kafka You’ve been calling on Facebook to change their approach to political ads for a while. Twitter and Google have agreed to do it in varying degrees. Do you think it’s particularly challenging for Facebook to make a decision on this? They’ve been sending out mixed messages.
Alex Stamos Google and Twitter are interesting because they have very different responses. I think the Twitter response is not the right one. I think completely banning political ads puts them in a very difficult situation because you’re not creating a speed bump. So if you look at what we did before the 2018 midterms, all three of those companies created speed bumps for political ads, where you had to verify who you are. And even with the fact that all you have to do is submit a driver’s license, the media was full of complaints from people who said they’re being censored because they had to verify their accounts. And that was just for a little bit of an inconvenience. Twitter is completely banning political ads, which motivates all kinds of political actors to run to the media immediately and say that they are being oppressed. Which is also a really powerful message. The Warren campaign has used this really effectively by intentionally violating Facebook’s rules, getting ads banned, and then turning around and, for $100 worth of advertising, getting millions of dollars in free media. The other [approach] is what Google used to not do, which Google is going to do now and Twitter is [also] doing now: treating issue ads as political ads. The problem is, in 2019, everything’s political. Right? And so that’s the other challenge that Twitter’s going to have: Is a Nike ad with Colin Kaepernick political? If Exxon is running an ad that kind of gently hints that they’re doing everything they can about global warming, is that a political ad? And in those decisions, you will have lots of unconscious bias among the people who are making those decisions, and that’s going to be something that’s fought over. Peter Kafka But Google did agree to some kind of limits on targeting. Alex Stamos Right. I think the Google approach is a smarter one. Creating a political ad/issue ad standard, and then limiting targeting for those ads, is exactly what I’d like to see Facebook do. Because at least it’s much less oppressive to people to say, “Okay, well, you can’t do a data upload.” And also, for the most part, like if you’ve got some little anti-global warming NGO, they’re not running a massive campaign. I mean, they’re not running a super complex data upload campaign. And so you’re actually kind of evening the playing field a little bit between the small actors and the large actors. Banning them totally, I think, actually pushes the power to the large actors, because Exxon can afford TV ads; they can afford full-page ads in the New York Times. And the little NGOs can’t. Peter Kafka But you have some political actors on the left who have relatively small war chests, saying, “This is going to hurt us.” Alex Stamos The Democratic Party has made a huge mistake in making a big deal of [calling for Facebook to restrict political ads], because Donald Trump has something like 67 million Twitter followers; he has 24 million “Likes” on his Facebook page. Breitbart, Fox News — he’s got this massive, free media ecosystem that is pushing all of his stuff without paying for it. So they’re talking about cutting off the Democratic challenger — who is going to come out of the bruising primary and have to pivot to the general election to get their message out. [And] to do so against Donald Trump’s organic reach, which is both people who push it intentionally and everybody in the media who, every time he tweets something crazy, ends up covering the controversy — one of the ways you can try to even that out is through much more economically efficient online ads.
So I think there is a trade-off here. I think there’s a reasonable trade-off in limiting the targeting. But I don’t think we should get rid of political ads overall. I think that pushes power back to the people who had power, back when radio and TV ads were most effective. Online advertising gives a huge amount of power to nonincumbents, NGOs, small organizations, unions. It empowers people that have less money because the entry cost of running an ad is not hundreds of thousands of dollars of production like it is for TV. Peter Kafka Back to Facebook. I’m assuming at some point they’re going to come out and say, “Here is what we have decreed.” But they’ve been spending weeks not deciding something. Do you think there’s something about the proposals that you and others are calling for that is more difficult for Facebook to grapple with? Alex Stamos There [are] two things going on. The first is there seems to be massive conflict at the highest levels of Facebook over this policy. You can see this because somebody will say something onstage and then a different executive will say something totally different. They also all seem to be leaking to the media. So I think what you’re seeing is there’s a conflict at the highest levels, and they’re deadlocked. And breaking that deadlock is Zuckerberg’s job. I think this is a leadership failure. Peter Kafka Do you think there’s any reason why this would be trickier for Facebook than it would be for Google? Alex Stamos I think Facebook is the most afraid. Facebook has the best ad targeting on the planet. It is better than Google’s, it is better than Twitter’s. And so I think there is a fear that opening the conversation about custom audiences and lookalike audiences — which is the other thing that I think should be limited — is existential. Peter Kafka Explain what a lookalike audience is. Alex Stamos A lookalike audience often starts with a custom audience. So, back to the Toyota example, let’s say you take everybody who bought a Toyota Tundra in the last six months [and] you upload them to Facebook. Those people have already bought trucks, so you don’t want to advertise to them. But you ask Facebook: “Go and find people who are like these 10,000 people who bought my product.” And then the Facebook AI goes and crunches and it says, “What is special about these 10,000 people versus the 2 billion other people on Facebook?” and then it creates an audience that’s a lot like that. Nobody can really explain what it’s looking for. It’s AI. Not super explainable. But it’s incredibly, incredibly effective at finding special aspects about those people that apparently make them want to buy trucks. So that’s another tool that the political advertisers like. Peter Kafka And that allows Facebook to put targeted ads in front of [their users]. And they’re saying this is aggregated and anonymous, but still, it is showing up in front of you and not someone else. Alex Stamos Right. The advertisers don’t find out who the people are, right? So the privacy of the individual is protected. But this is really powerful for political ads. You take your donor list and you upload it, and tell Facebook to find everybody else who might want to be a donor. And then it automatically blows it out to everybody who looks like, demographically, the people that have already given to you, and then you ask those people for donations — that can be incredibly powerful.
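To make the mechanics Stamos is describing more concrete — the data upload he calls the most dangerous component, and the lookalike scoring he just walked through — here is a minimal, illustrative sketch in Python. It is not Facebook’s actual system: the hashing step mirrors what ad platforms generally document for customer-list uploads, but the matching and the similarity scoring are crude stand-ins for a far richer trained model, and every email address and feature in it is made up.

```python
import hashlib

# --- "Custom audience": the advertiser uploads hashed identifiers; the
# --- platform matches them against its own (hashed) user base.

def normalize_and_hash(email: str) -> str:
    """Lowercase and trim the identifier, then hash it (SHA-256 as an example)."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

platform_users = {
    normalize_and_hash(email): features
    for email, features in {
        "peter@example.com": {"age": 45, "urban": 1, "donated_before": 0},
        "ann@example.com":   {"age": 37, "urban": 1, "donated_before": 1},
        "sam@example.com":   {"age": 62, "urban": 0, "donated_before": 1},
        "maria@example.com": {"age": 29, "urban": 1, "donated_before": 0},
    }.items()
}

advertiser_upload = [normalize_and_hash(e) for e in ["ann@example.com", "sam@example.com"]]
custom_audience = [h for h in advertiser_upload if h in platform_users]  # matched users

# --- "Lookalike audience": score everyone else by similarity to the seed list.

def centroid(rows):
    """Average feature profile of the seed (custom) audience."""
    keys = rows[0].keys()
    return {k: sum(r[k] for r in rows) / len(rows) for k in keys}

seed_profile = centroid([platform_users[h] for h in custom_audience])

def similarity(user, profile):
    """Negative squared distance to the seed centroid: higher means more 'lookalike'."""
    return -sum((user[k] - profile[k]) ** 2 for k in profile)

lookalike = sorted(
    (h for h in platform_users if h not in custom_audience),
    key=lambda h: similarity(platform_users[h], seed_profile),
    reverse=True,
)
print("seed size:", len(custom_audience), "| top lookalike match:", lookalike[0][:12], "...")
```

The point of the sketch is Stamos’s point: the leverage sits in the uploaded list. Limit the upload, and both the custom audience and the lookalike audience built on top of it get much weaker.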
Peter Kafka If Facebook does what you want them to do and institutes some kind of targeting limits, what immediate effects would we see in this campaign? Alex Stamos The Trump campaign will have to change their tactics. The kind of automated generation, the A/B testing, they’re utilizing it much more [than other campaigns]. I think it will affect [Michael] Bloomberg, because clearly his model is, he’s gonna throw a huge amount of money into it. I expect it will completely change how he’ll have to campaign. You’ll probably see campaigns spend more money on higher-quality content and then show it to the same number of people, but it’ll be less targeted. And it will reduce the value of these big data [databases]. And so you’ll see a bunch of consulting companies lose those contracts. I think the campaigns will still advertise as much online. The truth is — let’s say you limited targeting to 10,000 people or even a congressional district, 600,000 — that’s still much more effective targeting than you can get through almost any other medium. Peter Kafka What problems do limits on ad targeting not solve? Alex Stamos Well, it doesn’t solve whether or not people lie in their ads. It doesn’t solve whether or not your ads are negative. Now, there’s some interesting research that shows that some online ads are not necessarily more negative than offline ads, that negative ads have been a problem the whole time. It doesn’t all of a sudden make Donald Trump honest. But I don’t think there’s anything that companies can do around that. Another thing I’d like to see from Facebook is a policy on accurate claims about your opponent. Where they don’t want to go on the truthfulness issue is becoming the arbiter of political claims. So if Elizabeth Warren says “I can give everybody Medicare-for-all without raising middle-class taxes,” they don’t want other Democrats or Donald Trump arguing, well, “here’s an economic study that says it’s not true.” That’s politics, arguing over that kind of stuff. Peter Kafka No, Mexico’s not going to pay for the wall. Alex Stamos Yeah, exactly. Or Donald Trump saying the US economy has never been better. Well, Paul Krugman says it isn’t and therefore Facebook has to decide what is a political statement. We don’t want the companies making those decisions. It would be crazy to have these companies make those decisions. What I think they could have a standard on, though, is claims about opponents. So if you make a factual claim about your opponent, then that can get checked. So, “economy’s never been better” does not get fact-checked. “Joe Biden is about to be arrested in Ukraine” does get fact-checked. That’s what I’d like to see from Facebook. I’d like to see the ad targeting limit. And I’d like to see that really basic rule, which is not that hard to enforce because it’s not really an operational issue. The amount of organic content outstrips the number of ads running by tens of thousands of times. And so they’re already doing a bunch of work around checking ads to see whether or not they’re suppressing the vote and such. I don’t think it’d be that hard to add that standard, especially if they did the ad targeting limit, because there would be way fewer ads to review.
Peter Kafka Maybe this is just so obvious that no one else is talking about it, but I’m curious why we’re not bringing up the fact that Donald Trump can say all sorts of untrue things on his official Twitter feed or his official Facebook page, and that can be widely distributed. And it’s not a paid ad. Do you think targeted ads or misleading ads are a much bigger problem than the stuff that’s organic, or do you think these are all things you need to solve? Alex Stamos I think there are two big differences with ads. First: you’re trading money for reach. I think we have long recognized in the United States that there’s a difference between paid speech and organic speech. Second: Ads are one of the only ways you can put content in front of people who have not asked to see it. You’re able to insert yourself into the day of somebody who has never demonstrated that they want to see your content. For organic content, if you’re seeing stuff in your News Feed, it’s either because you followed one of those pages or you’re friends with somebody who’s sharing it. You have lots of tools so that, if you don’t want to see that content anymore, you can [stop it]. With ads, people are inserting themselves into your life. I think there is a fundamental difference in how we treat free versus paid speech, to be sure. Peter Kafka After the 2016 election, there was initially a lot of focus, and rightfully so, on Russian interference in the election, and then other state actors. That was sort of your focus, I believe, at Facebook. Now, we’re having this discussion on political advertising, and no one is connecting this to Russia or any other international actors. Is that because it really isn’t an issue now? Alex Stamos Even in 2016, the number of Russian ads was minuscule versus the spending by US political parties. Just in the last couple of days, there’s been new quantitative research that shows that the ability of the Internet Research Agency [the Russian troll operation accused of illegal interference in the 2016 presidential election, known as the IRA], which is the side that uses advertising to affect people’s political positions, is actually incredibly limited. So the quantitative evidence is that Russian interference in the form of trolling and ads was probably not that significant. The most effective component of Russian interference was most likely the [Russian military intelligence agency known as the] GRU campaign, which we just released a big report on, which involved breaking in, stealing emails, creating narratives, and then pushing those narratives via WikiLeaks, DCLeaks, and then friendly media outlets. That totally changed the conversation people were having about Hillary Clinton. Whereas the IRA stuff — if you’re part of the Secured Borders Facebook group, you’re not a swing voter. The truth is, the amount of political content of that type that is pushed by Americans vastly outstrips what the Russians push. And I think that’s part of the discussion. Part of it is also that only three companies have made changes here on online advertising. So really, out of the thousands of companies in the ad tech ecosystem, we only have a handful that are doing any verification of who’s running political ads. The rest are doing pretty much nothing.
And so there is the possibility of foreign ads, but I think just the size is so much smaller than what Americans are spending on it — it’s hard to argue that that’s the big deal versus the GRU activity, which we are still very vulnerable to in 2020. Peter Kafka You mentioned Cambridge Analytica, sort of dismissively. You’ve mentioned this in the past — that you think we’ve sort of overblown, overstated, and misunderstood Cambridge Analytica. Can you explain in simple English to our readers what we should and shouldn’t care about when it comes to that? Alex Stamos I think what we should care about with Cambridge Analytica is whether or not we allow political manipulation to be targeted with personal information. The actual leak of data is minuscule compared to a number of other things that have happened since then that nobody ever talks about anymore. Peter Kafka Facebook paid a $5 billion fine for its failings around Cambridge Analytica. So when you say there are bigger data leaks, what kind of things are you talking about? Alex Stamos For example, most of the major American telephone companies were caught selling people’s fine-grained GPS location. That was a story for a week? Peter Kafka Yeah. Vice had that. Alex Stamos So it looks like people might have died because of that. That data was being sold to bounty hunters, who were using it to track folks down, right? Fine-grained GPS location is about the most sensitive data you can possibly get from somebody’s phone. That’s never leaked, as far as I know, from Facebook or Google or any other major tech company. But if it comes from the phone companies, nobody talks about it. The Equifax breach was a massive, massive issue and it now means that the data of hundreds of millions of Americans is in the hands of probably the Ministry of State Security of the People’s Republic of China. And we’ve all kind of moved on. So it’s that versus what pages you “Liked” on Facebook. There’s just no qualitative or quantitative model where those breaches are not more important. Yet people don’t talk about this anymore. The importance of Cambridge Analytica is the political ad targeting. Because it turns out, you can run a company that’s much more effective than Cambridge Analytica, buying data from data brokers. And nobody seems to talk about that, which I think is a problem. Peter Kafka Can you offer advice for consumers who aren’t tech-savvy on responding to data issues and privacy scandals and breaches? Is there a practical way for people to think about this stuff when they’re trying to make decisions in their lives? Alex Stamos The No. 1 thing people should be worried about is their own personal accounts being taken over and used. The damage to them from a password being stolen from one site and then reused to take over their entire digital life far outstrips any damage from any of the massive data breaches. Even in situations where there is a mass password breach, that is the practical effect that matters. So going to Have I Been Pwned and putting in your email address; using a password manager; having different passwords on everything: that should be an individual’s focus. Because the truth is the media does not properly contextualize data breaches, because coverage is based much more upon which companies they dislike than the actual practical effect.
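A practical footnote on the Have I Been Pwned advice: the password side of that service can be queried without your password ever leaving your machine, via its public Pwned Passwords range API, which only ever sees the first five characters of the password’s SHA-1 hash. Here is a minimal sketch; the breached-email lookup Stamos mentions is a separate endpoint that requires an API key, so it isn’t shown.

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords corpus.

    Uses the k-anonymity range endpoint: only the first five characters of the
    SHA-1 hash are sent; the suffix matching happens locally on the response.
    """
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode()
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # A deliberately terrible example; expect a very large count.
    print(pwned_count("password123"))
```

A password manager does this kind of hygiene for you automatically; the sketch just shows that the check itself gives away almost nothing.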
There’s way more impact on individuals when passwords get leaked and their accounts are taken over, because then they end up sending fraudulent financial scams to their family members; they end up with ACH [automated clearing house] transfers being initiated from their accounts; they end up with their credit cards being stolen. Those are the people who feel a real, humongous impact when their passwords are stolen. That’s what I think most people can work on. There’s not much they can do about the Chinese having the Equifax data, but they can prevent organized crime from taking over their entire lives. Open Sourced is made possible by the Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.

Read More...
posted 5 days ago on re/code
Christina Animashaun/Vox Vox has a pretty typical privacy policy. That doesn’t make it great. If you’re reading this story on Vox.com, we have probably already collected quite a few bits of information about you. We, as well as our third-party advertisers, likely know which type of device you’re on, what browser you’re using, what you do on our site (which articles you read, how long you stay, what ads you visit), and what site you visit next when you click somewhere else. We know where you are based on your device’s IP address — a unique identifier assigned to each device connected to the internet — but don’t use GPS to actively track your location. We might be privy to more information, like information that signifies groups of people — details like age, income, interests, gender — but we don’t harvest or target that data. I’m telling you all this because as part of our Open Sourced project, we intend to explore the hidden consequences that various technologies — including ones we employ — have on regular citizens. We’ll be looking at things like Twitter’s privacy and free-speech policies as it begins to impose restrictions on political advertising; we’ll examine how Facebook tracks you around the internet; and we’ll explain how technologies like artificial intelligence are hoovering up vast amounts of data — and what they’re doing with it. Our goal is to explore and demystify the online world we all live in, to explain how algorithms work and what data you’re sharing with companies. And this means not only looking outward, but inward. So, let’s get back to Vox.com. If you reached us through social media — or even a device on which you’re also using social media — we also have access to portions of your social profile, such as your name, email address, and friend list. We collect all this information and combine it with other data from internet behemoth Google to get a demographic picture of who you are so that we can make our site better but, primarily, to serve you ads. We could, for example, sell an ad to Glossier that would land in front of wealthy women in their 30s who frequently purchase cosmetics. If you click to play an Open Sourced video on our YouTube channel, you’re also subject to Google’s privacy policy. And if you happen to be signed in with a Google account like Gmail, Google can collect even more info. We — like most publishers — also use Google to sell, manage, and track ads across our sites. Part of what makes that partnership so attractive to publishers is all the information Google has about you and is willing to share. The third-party advertisers on our site do the same thing but do so programmatically, meaning through an automated, auction-based system to sell the rest of the ad space on our site. If you buy something through certain links on some Vox Media sites, we can calculate how much is spent — though not your individual purchase — in order to calculate a revenue share percentage with the affiliate seller. How ad-supported journalism works We are an ad-funded publication: Advertisements help pay my salary, support our journalism, and keep the lights on. The more detailed a picture an advertiser can get of who they’re reaching, the more they will generally pay. Vox Media — the parent company of Vox, Recode, and Open Sourced — does not outright sell data about you for money, but we do sell access to you. Put another way, we tell advertisers that we can put ads in front of you and then track for them how these ads perform. 
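To make that concrete, here is a minimal, illustrative sketch of what any web server — ours or anyone else’s — can see from a single page request before any analytics or ad script runs. It is not Vox’s actual stack; it simply logs the standard data a browser volunteers with every request.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class PageviewLogger(BaseHTTPRequestHandler):
    """Logs the request data any site sees before a single tracking script runs."""

    def do_GET(self):
        record = {
            "client_ip": self.client_address[0],           # rough location via IP address
            "path": self.path,                              # which article you opened
            "user_agent": self.headers.get("User-Agent"),   # device and browser
            "referer": self.headers.get("Referer"),         # the page that sent you here
            "cookies": self.headers.get("Cookie"),          # any IDs set on earlier visits
        }
        print(record)
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"logged\n")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), PageviewLogger).serve_forever()
```

Third-party ad tags and analytics scripts embedded in a page receive this same kind of request when the page loads them, which is how a profile of you gets assembled across many sites.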
We also share your data with third parties we pay to provide services that require that data, like, for example, providing user analytics. In turn, those third parties are contractually obligated not to reshare your data. It’s a lot — I get it — but the net result is that you, dear reader, get to read our content without a paywall. “These are not documents for end users; they’re documents for lawyers and regulatory authorities” We broadly spell it out for you in our privacy policy, but to know that you’d have to go looking for it and read it — something most people (including me, before writing this article) don’t actually do. Just 9 percent of Americans say they always read a privacy policy before agreeing to it, while 36 percent never do, according to a new Pew Research survey. When you look closely, privacy policies in general can feel terribly invasive. Fortunately for you, Vox’s privacy policy is pretty normal for a media company. It’s also quite readable in the pantheon of intentionally arcane privacy policies. “I don’t think you guys are doing anything different than anyone else in the media ecosystem, but that doesn’t make it great,” Jennifer King, director of consumer privacy at the Center for Internet and Society at Stanford Law School, told me. We’ve looked at a lot of other media privacy policies. The New York Times, BuzzFeed, The Atlantic, Vice — well, basically every media company — collect varying levels of personal information when you visit their sites and apps, interact with advertisements, or sign up for subscriptions. They also share that information with third parties, who in turn collect their own data. King says the worst offenders are e-commerce sites that record your payment and other information even before you submit it and smartphone apps that require location data, essentially giving advertisers access to your address. Ad giants Google and Facebook by far know — and leverage — the most information about you. And companies’ long, dense, and ever-changing privacy policies give little insight for regular people about when this is happening and what exactly is happening with your data. As required by the Federal Trade Commission, we update users when we change our privacy policy. “These are not documents for end users; they’re documents for lawyers and regulatory authorities,” King said. “They’re not there to help typical users navigate what’s going on.” Government regulation of privacy policies is lacking While most sites have some sort of privacy policy, the content of those policies is largely unregulated. Basically, as long as the data websites are collecting is legal and users are informed of what’s being collected, it’s fair game. (Vox Media and many others read consent as simply going to our website, having had the opportunity to read our terms of use.) Unless you’re in an area like the European Union where it’s required by law, we don’t provide our own ways for you to opt out of tracking, but plenty of third parties do (more on that later). Under the EU’s General Data Protection Regulation, users see a banner with our privacy policy when they land on the site and must opt in to let us collect personal data. Some of this is changing thanks to the California Consumer Privacy Act, or CCPA, a new regulation that’s going into effect in January 2020. It’s keeping lawyers and developers at media companies across the country very busy. 
This law will allow consumers from California to opt out of the sale of their personal information, and export and delete any data that’s been collected. It doesn’t stop sites from tracking you, however. To actually stop these sites from tracking you, you will still have to use third-party products. The law also potentially broadens the definition of what “personal information” is and what “selling” that data means. It describes “personal information” as “information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.” The definition of “selling” includes “renting, releasing, disclosing, disseminating, making available, transferring, or otherwise communicating orally, in writing, or by electronic or other means, a consumer’s personal information by the business to another business or a third party for monetary or other valuable consideration.” What exactly a “valuable consideration” is isn’t clear, but it potentially could include letting third parties place cookies on our site and sharing information with them that could result in our overall ad value increasing. Despite CCPA applying only to Californians, the law will require a substantial amount of work for both Vox and our third-party partners. Here’s an abbreviated list from our legal department of what actions we’re taking:
- Reevaluating all data practices across all our sites, including recent acquisitions
- Updating our existing privacy policy to include the CCPA
- Developing a new, simpler process so that people can opt out, access, or delete the data we collect
- Assessing whether data we exchange with partners is “personal information” according to CCPA and whether that exchange/transfer would be considered a “sale”
- Developing a method by which consumers could retrieve data from partners, request deletion, and opt out of sales
The fact that legislation is arising at the state level — California, Nevada, and potentially New York — means these policies could become even more piecemeal. However, it’s likely that companies like ours will seek to be compliant with the strictest policy, for simplicity’s sake. What can you do to better prevent sites from using your data? This is all good news for the privacy-conscious. But what should you do if you’re still uncomfortable that sites like Vox have access to your data? Get off the internet. But, more practically, here are three things you can do. Update your settings on the web products you use, like your browser, social media, and email clients. “They almost always have options to opt for higher privacy settings, but by default they’re usually set to a less restrictive setting, so they can generate more profit off each user,” Daly Barnett, a staff technologist at the Electronic Frontier Foundation, told me. You can set your browser to “do not track”; however, Vox, like many other sites, chooses not to acknowledge that request. You can also use browser options like Chrome’s “Incognito” mode to keep the browser from saving your browsing data, though your activity is still visible to the sites you visit. Download a privacy browser extension. Barnett recommends a nonprofit product she works on called Privacy Badger, as well as uBlock Origin, AdBlock Plus, Ghostery, and NoScript. Barnett warns, however, that there’s an “arms race” between the blockers and the tracking companies, with each responding in turn to developments by the other.
If you live in California, as of January you can request to see or delete the data we and other websites collect. Open Sourced is going to make readers a promise to be as transparent as possible. We can’t fix everything, but we can help you be better informed about the decisions you make online — even if you don’t realize you’re making them. Open Sourced is made possible by the Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.

Read More...
posted 7 days ago on re/code
Boston: one of five cities where most of the growth in innovation jobs is happening | Billie Weiss/Boston Red Sox/Getty Images As a result, wealth is also being concentrated in those areas. Technology jobs and the economic prosperity they bring are being concentrated in fewer US cities, according to a new report from The Brookings Institution. Since 2005, five metro areas — Boston, the San Francisco Bay Area, San Jose, Seattle, and San Diego — have accounted for 90 percent of all US growth in “innovation sector” jobs, which Brookings defines as employment in the top science, technology, engineering, and math industries that include extensive research and development spending. Meanwhile, 343 metro areas lost a share of these jobs in that same period. The result: Wealth and productivity are becoming even more concentrated in fewer, primarily coastal cities. One-third of the nation’s innovation jobs reside in just 16 counties; half are concentrated in 41 counties. These jobs are high-paying and contribute to faster overall wage growth in the areas where they’re located than in areas with fewer innovation jobs. They also result in a lot of secondary work — jobs created to help serve those workers. These locations draw educated people and investment money from other places. Some 40 percent of adults in the top 5 percent of metro areas for innovation-job concentration have bachelor’s degrees, compared with 26 percent in the bottom three quartiles. As the report stated: “These places enjoy the benefits of what economists call cumulative causation, through which their earlier knowledge and firm advantages now attract even more talented workers, startups, and investment, creating a gravitational pull toward the nation’s critical innovation sectors while simultaneously draining key talent and business activity from other places.” Being an innovation city does have costs: These include worsening traffic, ballooning housing prices, and wage growth so high that smaller firms can’t compete. In theory, these spiraling costs should send jobs to cheaper areas, but the report notes that the inflection point is very high, and that when a company does move, its jobs don’t necessarily stay within the US. The disparities between so-called innovation cities and those with declines in innovation employment aren’t because small and midsize inland cities like Kansas City and Des Moines don’t have tech aspirations and aren’t trying to grow and innovate. Rather, the very nature of tech leads to the divide. Tech companies need a lot of tech workers, and they need a lot of other tech companies to help support their operations. Over time, these places develop the necessary infrastructure — broadband, public transit, high quality of living — for continued innovation. “Tech has a strikingly strong dependence on network effects or agglomeration effects, and therefore has winner-take-most dynamics,” Mark Muro, senior fellow and policy director at Brookings’s Metropolitan Policy Program, told Recode. “It has become so efficient to have clusters of sophisticated activity [and] workers in one place that the rich tend to get richer in these economies.” The Brookings Institution This is a problem unique to the technology age. “The traditional manufacturing and natural resources economy didn’t work at all this way,” Muro said.
“That economy initially revolved around the price and location of resources — rivers, bays, forests, or highways — and that dictated geography.” In time, regional disparities started balancing out as manufacturing and corporate functions — and people — moved to less expensive areas like the Southeast, which could then “catch up.” In other words, you could get the same things and have the same jobs in cheaper places, so people moved, and the nation’s economic divides narrowed. However, now that tech is a dominant and growing industry in the US, it’s having the opposite effect. As the report states, “large benefits accrue to firms when they locate together in urban areas,” which ensures that tech — rather than tending to spread out — concentrates further in those areas. Brookings suggests intensive government investment — direct funding, tax preferences, workforce development — to stem future regional economic divergence. The report lists a number of areas like Madison, Wisconsin; Albany, New York; and Provo, Utah, that have existing assets like universities that could potentially make them future innovation hubs, but this will only happen if there’s a concerted effort. “As a nation, we need to be aware of the winner-take-most dynamics when we wonder why tech isn’t just spreading into the heartland, why it’s not naturally diffusing,” Muro said. “If we want that to happen, we’re likely going to need to take robust policy steps.”

Read More...
posted 8 days ago on re/code
According to Uber, 99.9 percent of its trips are safe. | NurPhoto via Getty Images Ride-share drivers lack employment protections that could keep them safer. More than 3,000 people were sexually assaulted during Uber rides last year. The disturbing numbers, released by the company on Thursday, have many concerned about the safety of using Uber and other rideshare apps — especially since the data came out the same week that 19 women sued Lyft, saying drivers for the company sexually assaulted them. But it’s not just passengers who are being assaulted. According to Uber’s data, 42 percent of those reporting sexual assault were drivers. “Drivers are assaulted as much as the passengers are,” Michael Bomberger, an attorney who represents the women suing Lyft, told Vox. Passengers may be more vulnerable to certain types of assault. For example, Bomberger said he has heard from multiple riders who fell asleep in an Uber or Lyft and woke up to find a driver attacking them. And 92 percent of people who experienced sexual assault involving penetration were passengers, according to reports to Uber. But drivers — especially the roughly 19 percent of Uber drivers and 30 percent of Lyft drivers who are women — often report being groped by passengers, Bomberger said. And since they are independent contractors for gig-economy companies, they lack some of the protections available to taxi drivers and other workers, like partitions separating them from passengers. While advocacy groups have praised the Uber report for drawing attention to the issue of sexual violence in its cars, the report also highlights something more widespread: For workers around the country, sexual misconduct is a workplace safety issue. And though both Uber and Lyft have promised reforms, there’s evidence that for some, the platforms may not be safe places to work. Passengers aren’t the only ones being assaulted during Uber rides. Drivers are also in harm’s way. Uber released its first-ever United States safety report on Thursday, detailing sexual assaults, killings, and accidents on its platform. According to the report, 3,045 sexual assaults were reported during Uber rides last year, along with 9 murders and 58 people killed in crashes. The company says the overwhelming majority of its trips are safe — 99.9 percent have no safety-related issue at all, according to the report. But it is releasing the data because “we believe that for too long, companies have not discussed these issues publicly, particularly those relating to sexual violence,” the report states. The document comes at a time of increased attention to sexual assault during Uber and Lyft rides. In September, a woman sued Lyft after she said a driver kidnapped her at gunpoint, drove her across state lines, and along with two other men raped her, the Verge reported. When she reported the incident to Lyft, the woman said the company “​apologized for the inconvenience that I’d been through” and said she’d still have to pay for the ride (the company said she had reported it as an indirect route rather than a sexual assault). Then, in November, a Connecticut woman sued Uber, saying that she was assaulted by a driver. And earlier this month, 19 additional women filed suit against Lyft, saying that the company failed to respond adequately to their reports of sexual assault. Much of the focus in recent months has been on assaults reported by passengers. But there’s also a growing awareness of the risks drivers face. 
According to the Uber report, 42 percent of sexual assaults on the platform were reported by drivers. Uber’s methodology makes the percentage a little challenging to interpret; except in the case of assaults that involved penetration, the company does not have data on who actually experienced the assault, meaning the driver could have been reporting an assault by one passenger on another. Still, the data makes clear that, as the report puts it, “drivers are victims, too.” To Bomberger, who represents over 100 women suing Uber and Lyft over sexual assault, the numbers aren’t surprising. In addition to calls from passengers, his firm has also heard from hundreds of drivers who have been sexually assaulted, he said. In general, he said, “the types of assaults that occur to the passengers are more intrusive on average,” such as rape or attempted rape while the passenger is sleeping. But he often hears from female drivers who have been fondled while driving by drunken male passengers, he said. Overall, “non-consensual touching of a sexual body part” was the most commonly reported type of sexual assault across all Uber rides, with 1,560 such reports in 2018. And the report likely only captures a fraction of actual incidents, since there are many barriers to reporting sexual assault, from fear of not being believed to blaming oneself for the crime. “Whatever these numbers are” in the report, Bomberger said, “more women are being assaulted.” Uber and Lyft drivers lack protections that could keep them safe from assault For drivers in particular, the Uber report raises issues of workers’ rights and safety. Commercial drivers in general, whether they drive for a cab or ride-share company, face a high risk of physical and sexual assault, as Lauren Kaori Gurley reported at Motherboard earlier this year. There is no nationwide data on sexual assaults against taxi drivers, but drivers are 20 times more likely to be murdered on the job than other workers, according to the Occupational Safety and Health Administration. However, Uber and Lyft drivers may be more likely to face assault than taxi drivers for a few reasons, Gurley reports. For one thing, they are more likely to be female — in New York City in 2016, just 1 percent of Yellow Cab drivers were women, compared with 19 percent of Uber drivers and 30 percent of Lyft drivers nationwide. While people of all genders experience sexual assault, women are more likely than men to be assaulted. For taxi drivers at least, some cities have instituted regulations to help keep them safer — like bulletproof partitions to protect passengers and drivers, or surveillance cameras to record and deter crimes, Gurley notes. But Uber and Lyft generally provide neither to their drivers, though Uber is piloting audio and video recording in some places. Uber and Lyft drivers’ employment classification may also play a role in their safety. The drivers are currently classified as independent contractors rather than employees, meaning they can’t get workers’ compensation or form unions to push for safer conditions. A California law scheduled to take effect in January could change that in the state, though ride-share companies are pushing back hard against it. “If drivers are considered employees, companies will have a stronger obligation to create a safe workplace, and so they’re much more likely to report these incidents and take measures to keep drivers safe,” Veena Dubal, a law professor at the University of California Hastings told Motherboard. 
“Not only do they have to pay for worker’s comp and health care, but they also have a legal obligation to create a safe workplace.” Uber and Lyft say they are enacting safety reforms that could help drivers. Lyft is planning to release its own safety report and recently launched a new feature to allow drivers to easily share their location with friends and family. “It is Lyft’s goal to make the US ridesharing industry the safest form of transportation for everyone,” a spokesperson for Lyft told Vox in a statement. Meanwhile, Uber has rolled out a number of features aimed at improving safety for drivers and riders alike, Sachin Kansal, the company’s head of safety product, told Vox. Those include an emergency button in the app that allows a driver or rider to immediately send information about the ride to 911 dispatchers. “When we think about safety from a product perspective,” Kansal said, “we think of all our users,” not just passengers. And anti-sexual assault advocates have praised Uber’s transparency in releasing the report. “Understanding the problem is an important step in the effort to solve it,” said Erinn Robinson, press secretary for the Rape, Abuse & Incest National Network (RAINN), in a statement on Thursday. “We’d love to see organizations in every industry, including educational institutions, make a similar effort to track and analyze sexual misconduct within their communities.” Uber is also partnering with RAINN and other groups on initiatives to help prevent assault, including educational materials on sexual misconduct for drivers and riders. Still, the disturbing findings in Uber’s report are a reminder that people face real risks when they drive for rideshare companies — and being classified as gig workers, rather than full-time employees, could leave them more vulnerable.

Read More...