posted about 1 month ago on gigaom
Marijuana is a natural candidate for experimentation — and not just the kind that leaves New York Times columnists in hallucinatory states for eight hours. Because it’s often grown indoors, and growing it is only just becoming legal in a few states around the country, the plant is almost begging to be messed with. And if those experiments go well, they could affect more than just Mary Jane.

That’s according to Fluence, a startup that builds LED-based lighting systems for legal cannabis growers and partners with researchers to study their impact. Want to know if a particular strain of marijuana grows best under one spectrum instead of another? Or how much money could be saved by switching from incandescent lighting systems to LEDs? Fluence wants to be the company to ask.

The company, which was formerly called BML Horticulture, currently has to run all these experiments from a research lab in California — and rely on data from other growers and researchers — because its home city of Austin isn’t the most pot-friendly place in the nation. (The company could relocate, but right now it’s content to keep an eye on everything with remote monitoring systems.)

“We view this as just another plant. We’re looking at all crops that are considered high-value crops or crops that can better humanity, whether that’s lettuce, other types of leafy greens, or cannabis,” said chief executive Nick Klase. The company focuses on marijuana production because that’s where the money is, he said, but it wants its research to affect other aspects of agriculture, too.

Most of its cannabis-related research is meant to figure out how to grow more crop while using less electricity than conventional lighting systems require. A report from 2014 said that growing marijuana accounted for $6 billion of the country’s electricity costs that year. Installing more efficient LED lighting systems could have a tremendous effect on the amount of energy used by these operations.

But Fluence wants to figure other things out, too. Klase told me that the company is experimenting to see if different colors of light affect plant growth, for instance, or if they promote the production of specific desirable compounds. “Our goal is to better humanity with this technology, so obviously that’s going to extend way beyond cannabis,” he said. “The nice thing is that most of the things effective on cannabis is relatable to other crops.”

Sounds like a dream, right? There’s no denying the interest in growing plants other than marijuana indoors. Countless reports have talked about indoor agriculture and how it’s become more popular in the last few years. This is because it’s seen as more efficient; because people are interested in buying locally grown produce; and because indoor farming might offer a solution to problems wrought by climate change.

It just seems to make sense, right? If tech companies make efficient LED lights, and it’s going to get harder to farm in many parts of the world, why not play god and grow something that might not otherwise succeed in a particular area? As it turns out, there are many reasons. Economics might be the most important to growers. There’s also the effect these operations could have on the environment. Louis Demont Albright, a professor emeritus of biological and environmental engineering at Cornell University, cites both factors as the main obstacles to indoor farming.
“Just in today’s economic climate, if you buy enough light to raise wheat, you spend $18 for a loaf of bread just for the electricity for the wheat,” he said. Farmers won’t be able to handle those costs without assistance. All that light would also require growers to pay for the electricity to run their lighting systems and the air conditioning that keeps the whole thing from going up in flames. (That is not, as I understand it, what cannabis enthusiasts refer to as “lighting up.”) This would, according to Albright, have a worse effect on the environment than growing in greenhouses that take advantage of natural light.

“The idea that’s being proposed is a production system that increases the carbon footprint by an order of magnitude,” Albright said in an interview, “and it makes no sense to me that you would solve a problem by making it worse.” According to his research, it’s actually better for the environment to bring produce in from around the world than it is to grow it locally with large-scale indoor farming.

Albright isn’t the only one who thinks that indoor agriculture might be wasteful. Some cannabis growers have even realized that growing all their crops indoors isn’t sustainable from a financial perspective. Here’s what Utah State University professor Bruce Bugbee told MIT Technology Review in the same report that covered the electricity used to grow marijuana indoors:

Eventually, as growing marijuana becomes more accepted, some farmers may turn away from grow houses altogether. ‘I’ve visited growers in Colorado who’ve grown cannabis for 30 years and have always grown it indoors,’ Bugbee says. ‘The most progressive growers have run the numbers, and instead of warehouses they’re starting to build greenhouses.’ The plants may still be sheltered, but they’re open to view — and to the natural light of the sun.

There’s no denying the effect climate change will have — and has already had — on agriculture. And it’s the human way to think that technology can save the day. But if these professors and the many other researchers who agree with them are to be believed, indoor agriculture won’t be the panacea that some expect it to be. It’s more likely to be a short-term solution that will exacerbate a long-term problem.

“I think agriculture will move. Wheat may move from Kansas or wherever up into Alberta if it gets a few degrees warmer,” Albright said. “But it’s still going to be grown on the land.”

Indoor farming: Good for cannabis, not so good for food originally published by Gigaom, © copyright 2015.
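Albright’s bread figure is the kind of claim that can be sanity-checked in a few lines. Here is a back-of-the-envelope sketch in Python; every constant is an illustrative assumption rather than a number from Albright’s research, but the result lands in the same order of magnitude as his $18 estimate.

```python
# Back-of-the-envelope: electricity cost of the light needed to grow the
# wheat in one loaf of bread entirely indoors. Every constant below is an
# illustrative assumption, not a figure from the article.

LED_EFFICACY_UMOL_PER_J = 2.0   # photons delivered per joule by a good LED fixture
GRAIN_G_PER_MOL_PHOTONS = 0.5   # grams of wheat grain produced per mole of photons
GRAIN_KG_PER_LOAF = 0.75        # wheat grain needed for one loaf
PRICE_PER_KWH = 0.12            # rough U.S. retail electricity price

photons_mol = (GRAIN_KG_PER_LOAF * 1000) / GRAIN_G_PER_MOL_PHOTONS
energy_joules = photons_mol * 1e6 / LED_EFFICACY_UMOL_PER_J  # mol -> umol -> joules
energy_kwh = energy_joules / 3.6e6

print(f"{energy_kwh:.0f} kWh of lighting -> ${energy_kwh * PRICE_PER_KWH:.2f} per loaf")
# ~208 kWh -> ~$25 per loaf: the same order of magnitude as Albright's
# $18 figure, and that's before counting the air conditioning.
```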

posted about 1 month ago on gigaom
Social networks are the overworked writer’s best friend. It’s easy to observe the latest outrage on Twitter, grab a few good jokes from Reddit, or screencap the ridiculous things people write on Facebook and turn them into blog posts. Writers used to have to find stories to chase — now they just have to be willing to sift through gargantuan masses of shit to find a few nuggets of social media gold.

There are a few problems with this: the people whose content has been lifted don’t always like someone else taking credit for their words, photos, or videos; relying on outside platforms can lead to the meat of a publisher’s blog posts falling right out of their sandwich of context and witticism; and social networks don’t need writers to surface their best content. They can collect it themselves.

That’s what many decided to do this year. Reddit created a publication called Upvoted to highlight the stories that propagate on its service. Twitter introduced Moments to aggregate tweets about breaking news and entertainment alike. Snapchat got into the news business during the San Bernardino shooting. This was the year social networks tried to establish some control over social media.

The reasoning behind this shift, as well as each company’s approach to it, has varied. Upvoted resembles a traditional publication that just happens to pull its stories from the Reddit platform. It’s designed at least partly to redirect some of the traffic that would’ve otherwise gone to other sites back to Reddit itself. But, as Gigaom’s Tom Cheredar wrote, it’s also meant to humanize the community:

Right now, Reddit is viewed by advertisers with caution. The reasons for this are well-documented. But there’s no denying that Reddit is popular enough that you’d be crazy not to try and get in front of its audience. The problem is that it’s often hard to predict how the discussion will form on Reddit by its community, and that’s a risk many advertisers aren’t willing to justify should things go sour — deserved or not. Upvoted can soften those fears by enhancing the top submitted content on Reddit proper (as explained above). On other news sites that may credit a Reddit user for submitting a piece of content that gets written up in an article, usually there’s no desire to go beyond the user name. But doing so could help humanize the submitters, which might help advertisers overcome some of the negative characterizations of the overall Reddit community.

Twitter’s Moments feature (not to be confused with the Facebook photo app of the same name) has a different motivation. It’s supposed to find the best tweets so people never have to wonder why they should visit Twitter. It’s also supposed to make it easier for new users to understand what Twitter is about — a way to distill the chaos into a manageable form so normal people can interact with it.

But the implementation is very different from Upvoted. Moments doesn’t look anything like a traditional publication. Instead it looks like just another feature on Twitter’s navigation bar, making it harder to tell that serious editorial talent, like New York Times editor at large Marcus Mabry, is in charge of its content. Its team is a dedicated newsroom masquerading as part of the Twitter machine.

Snapchat’s foray into breaking news took yet another form. Its staffers gathered content shared to public “Stories” and made it available to anyone near the area affected by the San Bernardino mass shooting of December 2.
Small updates about the investigation were written by these same staffers, but for the most part, the company simply shared what its users were experiencing. I argued that this approach, combined with the ephemeral nature of Snapchat’s service, is a refreshing departure from the majority of breaking news reporting:

It’s easy for misinformation to spread on the web. Hitting “like” or “retweet” on a false report doesn’t require much effort — certainly less than it does to spend a few seconds looking for accurate information or sharing new info as it becomes available. That misinformation often remains until someone goes through and deletes it, which is another opportunity for someone to get the wrong idea about something, share that idea, and keep the perpetual ignorance machine going. Snapchat’s self-deleting updates don’t afford this opportunity. There’s no perpetuity. It’s a bit like talking on the phone with someone: Unless they’ve taken extra steps to record whatever was said, the information is passed along once before it disappears into the aether. The photo-and-video-based nature of the service also lends itself to eyewitness accounts, which limits the claims people can make. (Not that video or photo evidence on social media is infallible.)

These are three very different approaches, but the underlying goal is the same: gathering user-generated content before writers aggregate it themselves. So I’m left to wonder when other social companies will get around to creating their own publications instead of waiting for writers to swoop in, gather all the free content lying around, and turn it into something that could lead to millions of pageviews.

There are some obvious contenders. Vine’s users already provide a glimpse into what’s happening during important events, so it would be trivial for the service to collect the best coverage and make it available to users. The same could be said of Periscope — instead of showing things in six-second loops, it offers live-streamed video. Twitter could editorialize both services without much effort.

Another less obvious one might be Product Hunt. That site is like a gift from the tech journalist’s gods. (That is assuming tech journalists have gods willing to serve their — sorry, our — wretched souls.) Need to find something cool to write about? Go to Product Hunt! It’s got everything from software to podcasts, and many founders use the platform to answer questions about their products. Talk about manna from tech journo heaven. New products? Public statements? Links to the app store, animated GIFs, and ready-to-use images? Product Hunt is one dedicated “news” section away from putting a good number of tech writers out of their jobs. Let’s all take a moment to thank chief executive Ryan Hoover for sparing us from such a grisly end to our careers — at least for the moment.

Aggregating content from social networks has created a weird loop that takes something from those networks, puts it on another website, and then inevitably shares it to the same networks and other platforms. (I, and probably many other Redditors, encounter many links to BuzzFeed stories containing jokes I read a week ago.) These efforts are merely the result of social networks closing the loop.

This was the year social networks turned into news organizations originally published by Gigaom, © copyright 2015.

posted about 1 month ago on gigaom
Amazon is framing it as a victory that consumers signed up for its Prime service just in time to take advantage of free two-day shipping on last-minute gifts. But, much like its celebration of a record-breaking holiday shopping weekend in November, the company hasn’t offered many details to back up its boastful posturing.

The disingenuousness begins with the company bragging that 3 million people signed up for its Prime service in the third week of December. That seems like a victory — Prime customers are far more likely to remain loyal to Amazon than shoppers who don’t want to pay around $100 per year for access to the service. But it doesn’t account for the number of people who might have signed up for free trials — Amazon often pushes customers to give Prime a try — just so they could get free two-day shipping during the holidays. As long as those people cancel within the grace period, all Amazon really did was eat customers’ shipping costs.

Even worse are the boasts Amazon makes about the number of devices it sold or how many people watched something via its streaming video service. Those claims, much like similar ones made after Thanksgiving, are expressed as percentage increases that never provide a baseline for accurate comparisons. As I wrote when Amazon pulled the same stunt last month:

Yet the fact remains that we have no idea what any of this actually means for the company. Just look at its claim that it sold six times as many Fire TV products this holiday shopping weekend as it did during the same weekend last year. Does that mean it sold 6 million this year? How about 42 million? Nobody knows!

The reliance on percentage increases wouldn’t be so baffling if Amazon didn’t get rather specific in other areas. The company knows how many timers its Alexa devices set, the candy bought through its store, and what movie people watched on Christmas. (Over four million, gummy bears, and “Interstellar,” respectively.)

Amazon was also willing to share information about the last holiday delivery it made — enough for anyone close to the person to identify them, provided some of the items were given away as gifts and the recipients happen to stumble across a press release touted by a large tech company. Here’s what Amazon said:

The last Prime Now order delivered in-time for Christmas was delivered at 11:59 p.m. on Christmas Eve to a customer in San Antonio, Texas. The order included Blue Buffalo Dog Treats, an Amazon.com Gift Card, the all-new Fire tablet, Fruitables Dog Treats, LEGO Star Wars Death Star Final Duel Building Kit, Moleskine Classic Notebook and Stove Top Stuffing Mix.

This means Amazon was willing to share more information about what a Texan procrastinator bought at the last possible moment before Christmas than about the devices it sold, the amount of time people spent watching videos through its video service, or how many of those Prime subscribers kept their memberships. And here I thought the creepiest part of the holidays was Santa’s omniscience.

Amazon makes empty boasts about another holiday season originally published by Gigaom, © copyright 2015.

posted about 1 month ago on gigaom
Facebook’s attempt to provide free access to some Internet services has hit a roadblock: The Telecom Regulatory Authority of India has told the company’s wireless partner, Reliance Communications, to halt its support of the program.

At issue is the idea that providing free access to some services but not others violates the principles of net neutrality, which basically assert that Internet providers shouldn’t be able to charge more or less for access to specific websites. Those concerns have surrounded the Free Basics service affected by this request ever since the Internet.org initiative started rolling it out earlier this year. It even lost a number of high-profile partners worried about its potential ramifications.

Facebook chief executive Mark Zuckerberg responded to those concerns in a post on his public Facebook page. “If someone can’t afford to pay for connectivity,” he said in a status update, “it is always better to have some access than none at all.”

But those concerns weren’t limited to India. Later, a chorus of activists from Latin America led the Electronic Frontier Foundation to ask if Internet.org leaves people who rely on the service without legitimate access to the Internet. Here’s the crux of the activists’ and the EFF’s argument against Internet.org:

It is true that Facebook is not the only property made available through Internet.org. The free bundle includes open resources such as the excellent Wikipedia. But the problem runs deeper than simply which sites to which poor users should have subsidized access. It lies in the very concept that Facebook and its corporate partners, or governments, should be able to privilege one service or site above another. Despite the good intentions of Facebook and the handful of allied companies, Internet.org effectively leaves its users without a real Internet in the region.

Now it seems that the Indian government has similar questions about the effect Internet.org might have on the free Internet. As an unidentified source told the Times of India when it first reported on TRAI’s request for a halt to the service: “The question has arisen whether a telecom operator should be allowed to have differential pricing for different kinds of content. Unless that question is answered, it will not be appropriate for us to continue to make that happen.”

It’s not clear how long the Indian government will take to examine the issue. But at least one thing is clear — the battle to decide whether it’s better to have free access to a limited Internet, or costly access to a free Internet, is far from over.

Facebook’s Internet.org stumbles in India originally published by Gigaom, © copyright 2015.

posted about 1 month ago on gigaom
Google isn’t content to let Facebook dominate the messaging market in the West. The Wall Street Journal reports that the company is working on a platform that will allow consumers to message assistive “chatbots” as well as real live humans.

Details about the service are scarce. A name wasn’t revealed, for instance, nor was a timeframe for when consumers might expect to be able to use the app. But the report did reveal that Google has been working on the product for about a year.

Including the chatbots will make this new service different from Hangouts, Messenger, and the other communications platforms Google has introduced. (Anyone remember Wave, the company’s short-lived real-time messaging tool?) The chatbots, according to the Journal’s report, will allow people to send a query to an automated tool that “will scour the Web and other sources for information to answer a question,” much like the question-answering function of Google Now.

This shouldn’t come as a surprise. Google’s strength is its ability to answer questions, whether through a search engine or a virtual assistant, and flexing that muscle to popularize a messaging app would make sense for the company. It would also let Google compete with Facebook’s M, a partly automated tool that uses a mix of artificial intelligence and human workers to answer questions, doodle, find information, book appointments, and perform other functions.

If Google could use its artificial intelligence prowess to provide a service similar to M without requiring humans to perform any tasks, it could have just what it needs to compete with Facebook Messenger’s growing dominance. And with both companies working to create messaging apps that don’t restrict people to communicating with other humans, the combined force could help messaging services become the central hubs of consumers’ digital lives.

Google’s working on a chatbot-filled messaging service originally published by Gigaom, © copyright 2015.

posted about 1 month ago on gigaom
Remember Ello, the social network that was portrayed as the anti-Facebook well over a year ago? Well, it’s still around — but according to its founders, the anti-Facebook framing is something it never should have been given in the first place.

Sure, there are some aspects of the service that make it seem like a response to the world’s largest social network. It started out small. It’s promised never to display advertisements, which means it doesn’t track its users around the Web. And it’s based around the idea of communicating with other people, which, due to the lens through which we view the Internet, makes it a Facebook competitor.

But it doesn’t matter that Ello wasn’t meant to compete with Facebook. That’s how the service was perceived, and it was called the “anti-Facebook” so often that the label became the service’s tagline in the mind of the general population. (Well, the portion of the general population that reads tech journalism, at least.) It’s also part of the reason chief executive Paul Budnitz stopped talking to the press.

“Someone, somewhere had called Ello a Facebook killer, and there was just all this hype in the news and I had basically every VC in the country trying to talk to me,” he said in a recent interview. “For most people that’s really an awesome thing and I guess what every startup wants, but for us everyone was coming for something that I didn’t want to build and we really had no interest in building.”

It didn’t matter how often Budnitz said Ello wasn’t taking on Facebook — the story had taken on a life of its own. Investors were calling with hopes of getting in early with the so-called Facebook killer. Consumers were flocking to the site in search of an alternative to that most polarizing of social networks. And writers, like me, were interested in the company mostly because of a false narrative.

Eventually the press stopped. Ello didn’t kill Facebook within a few months, its founder wasn’t giving interviews, and relatively few people used the service. An analyst for App Annie told me that Ello “essentially is so small it doesn’t and can’t compare to” Facebook, Twitter, Snapchat, and other established social networks or “anything that could be defined as an up-and-coming social app.”

Ello wasn’t even of much interest to researchers. Jason Mander, the director of research at GlobalWebIndex, told me Ello was included in just one of the firm’s quarterly surveys about consumer Internet usage. It wasn’t included in subsequent surveys because it was of little interest to the firm’s clients and respondents. Ello seemed to have been forgotten by everyone outside its relatively small audience.

That didn’t stop the company from continuing its work. I reached out to its press team shortly after I started at Gigaom, on a lark. Mostly I expected it to email me every once in a while with a product update, or user data, or the other innocuous things most social networks use to garner attention. It did none of those things. Instead, it sent me the same emails its users get about new features or changes.

“I’d do these interviews with these really nice people and they’d put this stuff up like ‘When is Ello going to switch and start running ads?’ and ‘You’re really not going to go for the billion dollars right now?’ So we just felt like we weren’t getting through the noise,” Budnitz told me. “One of the reasons we’re finally doing interviews is that, if you go on Ello, it’s actually really, really awesome.”

It’s also focused on inspiration, as Budnitz puts it, instead of social networking. People aren’t using Ello to connect with high school classmates — they’re using it to share the images, blog posts, and graphic designs they’ve made or discovered. All of the company’s focus over the last year has been on furthering that mission and giving users a place to connect with like-minded people around the world.

That’s part of the reason why Ello doesn’t have ads. Sure, part of it’s because Budnitz and his co-founders think the way Web ads work is kind of creepy. But the other part is that most advertisements would disrupt the look of the site. “Beautiful photographs look really crummy next to ads for car insurance and tortilla chips,” he said. So the company doesn’t, and indeed can’t, show ads.

So what is Ello? “The basic thing that we’ve been building is a safe and positive community where creators publish, share, and eventually sell inspiring work. It’s really a place for people who make things to inspire one another,” Budnitz said. “And really it’s not just high-end professionals and designers and all that stuff. I would say we have all types of people, amateurs, professionals, you name it.”

That positivity is enforced by a full-time support staff, features that give Ello users granular control over who can see what they post, and its small audience. Visiting the site feels less like signing on to a social network and more like stepping into an art gallery where people who don’t know each other gather around, look at a specific work, and then discuss it in a cool-but-congenial way.

Soon it will be a little different. Budnitz said the company plans to introduce a commerce portion of the service that will allow creators to sell things to other users. It also plans to introduce a version of the site that doesn’t require people to sign on to view work — which should go a long way towards increasing its visibility — and to (finally) release an application for Android smartphones.

But perhaps the biggest change will be the ability for Ello users to post content to other social networks through the platform. This could make it something akin to a central management tool that allows people to share things on Ello first, thus giving them access to what’s described as a supportive community filled with talented people, before sharing them with the masses on other networks.

“Our research shows that one of the most popular reasons for using social networks is because people’s friends are on them too. I think that’s why Ello struggled to attract a critical mass, because people tend to join when they perceive lots of their friends to be using the service too,” Mander said. “However, multi-networking is widespread. Globally, the average internet user has accounts on over 6 networks (rising to 7 among 16-24s). So, there’s certainly scope for Ello to sit alongside other services, even if its users are still engaging with other platforms too.”

Ello has raised around $10 million, and Budnitz said its team remains small so it can keep costs down. The commerce features will help it monetize. It probably won’t ever see the kind of success that other networks have (here I go thinking about Facebook again), but it could be a sustainable business. If anything, that makes it more interesting than if it were an also-ran that died battling Facebook.

The company might never escape the idea that it’s the anti-Facebook. That’s certainly the perception I had of the service when I started researching this post. And I’ll confess that even now the cynic in me can’t help but wonder if it really was meant to take on Facebook but pivoted once the hype died down. Ello will be fighting this perception for a long time. Budnitz is okay with that. As he told me: “We have time.”

Ello, the startup formerly known as the anti-Facebook, grows up originally published by Gigaom, © copyright 2015.

posted about 1 month ago on gigaom
Sinclair is CEO and cofounder of Apprenda, a leader in enterprise Platform as a Service.

It seems like every conversation related to cloud-native software projects these days involves microservices. During those conversations, someone inevitably draws a comparison with service-oriented architecture (SOA) or hesitantly asks the question, “Aren’t microservices just SOA?” While it might not seem important at first glance, this is actually a pressing question that gets little attention. Usually it is either dismissed outright in the negative or accepted unquestioningly in the affirmative. As an exercise in answering the question more deeply, let’s spend a little time understanding SOA and microservices independently and then compare the two.

In the early 2000s, service-orientation became a popular design principle. Driven by backlash against highly coupled, binary-oriented systems, service-orientation promised significant increases in flexibility and compatibility. Microsoft’s Don Box was one of the first to truly spell out the guiding principles of SOA, captured in four simple tenets:

1. Boundaries are explicit
2. Services are autonomous
3. Services share schema and contract, not class
4. Service compatibility is based on policy

By adopting a service-oriented architecture that adhered to these tenets, one could unlock the value in SOA. Very quickly the world’s top software vendors capitalized on the opportunity and began building platforms and technologies to support the concept. In fact, the SOA movement became almost entirely a vendor-driven paradigm. Vendors scrambled to build middleware to allow developers to build SOA components that could be delivered and managed in the context of those four tenets. That middleware, in many instances, became bloated. Moreover, industry specifications that defined things like SOA schemas and policy management also became bloated. This bloat resulted in heavyweight components and a backlash from developers who viewed SOA as a cumbersome, unproductive model.

In the mid-2000s, cloud infrastructure started gaining steam. Developers were able to quickly stand up compute and storage and install and configure new applications to use that infrastructure. Additionally, applications continued tackling new levels of scale, requiring distributed architectures to properly handle that scale. Distribution of components forced segregation of application logic based on functionality. That is, developers would break an application into smaller components, each responsible for specific functions in the app. This ability to instantaneously call up infrastructure, coupled with the propensity for developers to use distributed architectures, prompted practitioners to formalize these ideas into a framework. Microservices became the concept that embodied much of this and more.

It would seem that the backstory for microservices satisfies tenets 1 through 3 (although tenet 3 is a bit more relaxed in microservices, since a REST API wouldn’t typically be considered a strict contract), making microservices very similar to SOA. So how are they different? Microservices, as originally conceptualized by Martin Fowler and James Lewis, extend expectations beyond how an application is partitioned. Microservices as a pattern establish two other important tenets:

5. Communication across components is lightweight
6. Components are independently deployable

These seemingly small additions to the criteria defining microservices have a drastic impact, creating a stark difference between microservices and SOA. Tenet 5 implies that complex communication buses should not be used in a microservices architecture. Something like an enterprise service bus (ESB) under the hood would create a large, implicit system dependency that would, by proxy, create a monolith of sorts, since all the microservices would have one common, massive dependency influencing the functional end state. Tenet 6 means that deployment monoliths are not allowed (something that was common in SOA). Each service should carry its isolation all the way up the SDLC to at least deployment. (A minimal sketch of a service that satisfies both tenets follows at the end of this post.)

These two tenets ensure that services remain independent enough that agile, parallel development is not only possible but almost required. While SOA meant that logic was divided into explicitly bounded components of the same application, microservices’ independent deployability means that the components need not belong to the same application at all, and may each be their own independent application.

SOA set the tone for the fundamental architectural concepts embedded in modern microservices, but didn’t go far enough to create a powerful model that would solve the problems associated with bloat and speed of development. Microservices principles have a huge impact on how we think about the software development process and are not just a prescription for the architectural outcome. Thus, microservices can create a better outcome than their SOA predecessor.

Are microservices just SOA redux? originally published by Gigaom, © copyright 2015.
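Here is the sketch promised above: a tiny, standalone “inventory” service written against Python’s standard library. The service name, port, and JSON shape are all invented for illustration, but the pattern is the point: a lightweight HTTP-plus-JSON contract (tenet 5) running as a process that can be built, deployed, and replaced on its own (tenet 6).

```python
# A minimal, independently deployable "inventory" microservice using only the
# standard library. Communication is lightweight (HTTP + JSON), per tenet 5;
# the service runs as its own process with no shared bus or deployment
# monolith, per tenet 6. All names and the port are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

INVENTORY = {"sku-42": {"name": "widget", "quantity": 17}}  # stand-in for a datastore

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The contract is the URL shape and the JSON schema, not a shared
        # class (tenet 3, in its relaxed REST form).
        sku = self.path.strip("/").split("/")[-1]
        item = INVENTORY.get(sku)
        body = json.dumps(item if item else {"error": "unknown sku"}).encode()
        self.send_response(200 if item else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Deployed on its own; a peer service calls it with a plain HTTP GET, e.g.
    # urllib.request.urlopen("http://localhost:8080/inventory/sku-42")
    HTTPServer(("0.0.0.0", 8080), InventoryHandler).serve_forever()
```

Because the only coupling between this service and its peers is the URL and JSON schema, either side can be rewritten, rescaled, or redeployed without coordinating a release with the other — exactly the property an ESB-centered SOA deployment tends to lose.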

posted about 1 month ago on gigaom
Bitcoin and other cryptocurrencies are already starting to shake up the financial services industry. They have also gotten entrepreneurs thinking about other applications for the blockchain technology that underlies them, including ones that address processes inside non-financial companies, such as contracts, audits, and shipping. The digital signatures that certify each transaction and the distributed, append-only online ledger that constitute the core of blockchain tech have the potential to offer even more security in these and other areas than the more traditional approaches used by businesses.

Blockchain isn’t the only game in town, either. The Linux Foundation recently revealed that it is leading an open source effort to develop an alternative to bitcoin’s underlying tech. The initiative, which has been dubbed the Open Ledger Project, is being supported by a coalition of leading financial services and tech companies, including Wells Fargo, State Street, the London Stock Exchange Group, Cisco, Intel, VMware and IBM. IBM, which has been a driving force behind the project, is reportedly contributing many thousands of lines of code to it, as well as considerable developer resources.

The new kid on the block will have some catching up to do with blockchain, which is already being employed in some innovative ways. Nasdaq OMX, the parent company of the NASDAQ stock exchange, wants to use the tech to oversee trades in the stock of private firms, and the Securities and Exchange Commission recently approved a plan by Overstock.com that involves the online retailer issuing stock using blockchain technology. Startups such as Digital Asset Holdings and Coinbase are also looking to profit from growing interest in digital tracking and trading using the new approach.

The firms that gain traction here will get plenty of attention. Investment banking firm Magister Advisors thinks that financial institutions will spend a total of over $1 billion on blockchain-related projects in 2017. And finance is just one industry where the new technology could drive significant change. In the music world, startups such as PeerTracks and Bittunes are aiming to use it to revolutionize the way music is bought and shared. And in the art world, Verisart is harnessing the blockchain to improve the way art is secured and verified.

Looking at enterprise markets, there is a huge opportunity to apply blockchain technology or other variants in any place that involves swaps, trades, or exchanges. One of the most obvious applications is in contractual situations where there is a need for proof that various parties are committed to a transaction. Companies such as Block Notary and Bitproof are developing ways to bind digital signatures into the blockchain, and some firms are also experimenting with the technology to create escrow contracts that hold money on account until mutual agreement is recorded.

Another area where I expect to see more activity using blockchain technology is auditing. Deloitte is one of a number of professional services firms experimenting with distributed digital ledgers. Here, transactions can be posted into a blockchain, which would apply a timestamp and act as a repository. Typically, auditors only choose a sample from a set of transactions to check; using the new approach, it may well be possible to verify a much broader range of transactions securely and cost-effectively. (A toy sketch of the underlying mechanism follows at the end of this post.) There are a lot of regulatory issues still to be ironed out, but the opportunity to provide certainty with significantly less friction is a compelling one.

There is also a big opportunity to use the technology to improve shipping and supply chain management. An example of a startup here is Thingchain, which is applying a bitcoin-inspired cryptosystem to multiple use cases, including proving the provenance of goods and who owns them.

Many companies are still learning about the potential of blockchain technologies, so it may be some time before we see broad adoption beyond finance. But the potential is significant — and not only in the areas that I’ve outlined above. Entrepreneurs are already exploring enterprise applications that cover everything from patent registration to recording the results of boardroom votes. Expect to see more and more businesses joining the blockchain gang in 2016 and beyond.

Martin Giles is a partner at Wing Venture Capital (@Wing_VC). He was previously a journalist with The Economist.

Blockchain, its new rival, and their future in the enterprise originally published by Gigaom, © copyright 2015.
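Here is the toy sketch mentioned above. It demonstrates the two primitives the audit use case relies on, timestamps and hash-chaining, in plain Python. It illustrates the concept only; it is not any vendor’s implementation, and real systems add digital signatures, consensus, and replication across many machines.

```python
# Toy append-only ledger: each entry's hash covers the previous entry's hash,
# so altering any historical transaction invalidates everything after it.
# Real blockchains add digital signatures, consensus, and replication.
import hashlib, json, time

ledger = []

def append(transaction: dict) -> None:
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"timestamp": time.time(), "tx": transaction, "prev_hash": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    ledger.append(entry)

def verify() -> bool:
    # An auditor can recompute every hash instead of sampling transactions.
    prev_hash = "0" * 64
    for entry in ledger:
        payload = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
        if digest != entry["hash"] or entry["prev_hash"] != prev_hash:
            return False
        prev_hash = entry["hash"]
    return True

append({"from": "acme", "to": "supplier", "amount": 1200})
append({"from": "acme", "to": "auditor", "amount": 50})
assert verify()
ledger[0]["tx"]["amount"] = 9999   # tamper with history...
assert not verify()                # ...and the chain no longer verifies
```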

posted about 1 month ago on gigaom
Flipboard is releasing a slew of features to make its content discovery service more attractive to the publishers whose content appears in its apps and website. In doing so, it’s also indirectly responding to the threat posed by Facebook’s Instant Articles and other services that find blog posts for their users to read.

The first of these features is the ability for publishers to create profiles where all of their content will be posted. Previous versions of the service, which debuted to much fanfare alongside the original iPad, were limited to a series of feeds that had to be discovered via its search tools rather than found in a single, easy-to-find place. Publishers can manage their profiles with custom logos, designs, and many of the settings that can be tweaked on essentially any platform. Flipboard will also work with publishers to verify these profiles, making it easier to tell whether a page is actually run by a paper like the Washington Post or managed by impostors.

In addition to these profiles, publishers will now be able to include “end cards” that link to their other content on the last “page” of their stories. Now, instead of having to worry about readers abandoning them whenever they finish a story, these content providers will be able to at least attempt to keep their interest. Flipboard’s head of partner platform products, Jack Mazzeo, said that early tests showed these new end cards increase click-throughs by 15 percent. “We think publishers will really find that valuable,” he said, adding that Flipboard wants to “improve reader engagement” with the publishers with which it has partnered.

Both of these changes make Flipboard more competitive with Facebook’s Instant Articles, which were recently updated with similar features allowing publishers to link to whatever content they’d like at the end of their stories. The new profile pages resemble the central hub for content around which Facebook is organized.

Another change is Flipboard’s new support for Google Analytics and comScore. Both are supposed to make it easier for publishers to sell ads against traffic in Flipboard’s mobile applications, which was previously measured by the company and detailed in monthly reports instead of up-to-the-minute updates. Mazzeo said publishers requested both integrations. The update is meant to serve two purposes: for “larger publishers to have more real-time visibility into their traffic on Flipboard” and for “smaller publishers who want to understand how much traffic” they get to decide whether they want to invest more in the service.

All of these efforts follow a report from the Wall Street Journal claiming that the company was “floundering”: one of its co-founders left, its ad rates reportedly fell by half, and acquisition talks with Twitter fell through. Chief executive Mike McCue dismissed the report in a later interview with Fortune. That dismissal was echoed by Flipboard spokeswoman Christel van der Boom. During the interview with Mazzeo, she said that the Journal’s report was based on “anonymous sources” and that Flipboard recently had the best quarter in its history. She later said she didn’t have the exact numbers to back up that claim.

Either way, and regardless of how Flipboard has positioned these new features, it’s clear that the company is moving past its beginnings as a social aggregator. Is that because Facebook removed the need for apps like Flipboard with Instant Articles, or because the company’s experiencing a turbulent period in its history? It doesn’t really matter. The result is the same: an experience that relies less on the hodgepodge nature of Flipboard’s core service and makes it more like a collection of traditional magazines. Flipboard is happy. Publishers are happy. Now we’ll see if users are happy, too — and that’s when the real fun will get started.

Flipboard tries to keep publishers happy with profiles, ‘end cards’ originally published by Gigaom, © copyright 2015.

posted about 1 month ago on gigaom
Google’s self-driving car division will soon become an independent company under the Alphabet umbrella, according to a report from Bloomberg Business.

Alphabet is the corporate behemoth Google turned itself into earlier this year so it could experiment more freely with businesses unrelated to the Web without facing pressure from shareholders to monetize those experiments. (Google itself became a wholly-owned Alphabet subsidiary during this shakeup.) The self-driving car division would join Verily, the longevity-focused Calico, and YouTube as Alphabet’s standalone companies. These companies are expected to experiment with new ideas like startups while conducting themselves like real businesses that can’t rely on venture capitalists to keep them afloat.

Such a move could motivate the self-driving car division to test a ride-hailing service in the cities where its vehicles are allowed to operate. Along with the news about the division’s looming independence, Bloomberg Business also reported that such a service could debut in cities like Austin and San Francisco.

A service like that would put Google in competition with Uber, which also plans to replace drivers with self-driving cars. That could be weird: Google Ventures invested in Uber, and even though the venture arm is technically separate from Google proper, it would still seem like the firm was competing with a member of its portfolio. Making the self-driving car division separate from Google could help reduce the awkwardness. It would also make it easier to trust the vehicles not to gather personal information to feed into Google’s advertising network, give the division more freedom, and let Alphabet expand its growing empire of small businesses.

So I suppose the surprise isn’t that Alphabet would spin off this division and separate it from Google. That’s why the restructuring occurred in the first place: to allow Google’s founders to experiment with new ideas while allowing Google to focus on its strengths. It’s more surprising that the split has taken this long.

Google’s self-driving car division to become Alphabet company originally published by Gigaom, © copyright 2015.

posted about 1 month ago on gigaom
Security tools are only useful if their warnings are heeded. Yet one of the culprits behind the infamous 2013 data breach at Target was the company’s decision to ignore its own alert system. The result: tens of millions of dollars in fines, the compromise of 40 million shoppers’ credit card data, and the departure of its chief executive.

Eastwind Breach Detection is emerging from stealth with a software-as-a-service tool that does its best not to be ignored. The post-breach detection software will send alerts to everyone who’s supposed to receive them — incident responders, a company’s leadership team, the IT department — until the problem is addressed.

“The interesting thing about [breaches at Home Depot, OPM, and Target] is that the alerts fired just became part of the noise,” says chief executive Paul Kraus. “If our systems don’t see a change in behavior we’ll alert again. And we send out an insight report the day the breach is identified and then again every week after.”

Eastwind also offers context around the breach. Instead of holding information for a few days before trashing it, the company monitors its customers’ data for 200 days to offer an idea of what happened before, during, and after a breach. This data is then collected and shown in the weekly reports sent to its customers. It’s a bit like marrying your high school sweetheart: this person knows what happened before any problems occurred, watched them take place, and will presumably be around to make sure the issue is taken care of. (Trust me on this one.) Eastwind is meant to remain constantly vigilant, and its memory is long.

The company has other features that are supposed to differentiate it from its competitors, including a mobile application people might actually want to use; a service that can operate on Eastwind’s cloud or on other platforms like Amazon Web Services; and the ability to detect when an intruder has stolen information. But perhaps Eastwind’s greatest strength is that it was built so that anyone can use it.

“I’ve had the opportunity to sit with [leaders of] Fortune 100 companies that have said, ‘I’ve taken the traditional security solution and given it to really smart guys to analyze,’” Kraus said. “It hurt me to think that a Fortune 100 company would have a monopoly on smart people, or that the problem was so complicated that only PhDs from Stanford or PhDs from MIT could solve it.”

Eastwind is Kraus’ response to that concern. Its mobile app is designed to make it easy for anyone to learn about the health of their company’s network. Its team was assembled to be the “really smart guys” behind a service that obviates the need for really smart guys. And the company’s reports are meant to do the thinking for users.

All together, this means Eastwind isn’t going to forget anything that might help it detect a breach, and it won’t stop warning its customers about an issue until it’s been resolved. Maybe these features will be enough to convince the companies responsible for millions of people’s private data to heed alerts about a threat.

Eastwind leaves stealth to help companies respond to cyberattacks originally published by Gigaom, © copyright 2015.
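Eastwind hasn’t published how its alerting pipeline is built, but the behavior Kraus describes (re-alert on every sweep until the anomaly clears or someone takes ownership) is easy to sketch. The code below is purely illustrative; the names and structure are invented, not Eastwind’s.

```python
# Sketch of "alert until the behavior changes": an incident keeps notifying
# every recipient on every sweep until it is acknowledged or the anomaly
# disappears. An illustration of the idea, not Eastwind's implementation.
import time
from dataclasses import dataclass

@dataclass
class Incident:
    name: str
    recipients: list
    acknowledged: bool = False

def sweep(incidents, still_anomalous, notify):
    for incident in incidents:
        if incident.acknowledged or not still_anomalous(incident):
            continue  # resolved or being handled -- stop nagging
        for person in incident.recipients:
            notify(person, f"UNRESOLVED: {incident.name}")

incidents = [Incident("odd outbound traffic", ["it-oncall", "ciso", "responders"])]
for _ in range(3):  # in production this would run on a schedule
    sweep(incidents, still_anomalous=lambda i: True,
          notify=lambda person, msg: print(person, "->", msg))
    time.sleep(0.1)
```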

posted about 1 month ago on gigaom
A version of this post was originally published on Gigaom Research’s Analyst Blog.

Uber has hit another speed bump on its way to disrupting the transportation establishment worldwide. The Seattle City Council voted yesterday in favor of allowing on-demand workers the right to unionize, despite their nebulous status as ‘independent workers’ falling in the cracks between 1099 contractors and W-2 employees. As the NY Times’ Mike Isaac, Nick Wingfield, and Noam Scheiber put it:

… the Seattle City Council plans to vote on a proposed law to give freelance, on-demand drivers like Mr. Creery the right to collectively negotiate on pay and working conditions, a right historically reserved for regular employees. Mr. Creery and hundreds of fellow drivers in Seattle helped push for the legislation. The hope, he said, is to give drivers more say on how much they should be paid.

The successful vote in the City Council means the drivers’ organization — the App-Based Drivers Association (ABDA) — will gain an explicit right to unionize on-demand workers, a first in the U.S. This does not mean that the federal government has agreed, and it would be reasonable to expect that the National Labor Relations Board or the federal courts might ultimately get involved.

Uber, its competitors like Lyft, and a wide variety of other on-demand work companies have their economics squarely based on the premise that on-demand workers are not employees, and as a result, the companies paying them for their labor can sidestep the expenses and liabilities that come with full-time employment, such as insurance, worker compensation, social security contributions, vacation, sick leave, and the like. The on-demand companies clearly benefit from this arrangement, while — at least in some cases — the on-demand workers would rather gain the protections and benefits that labor laws in the U.S. provide. Such as the right to unionize.

Uber is already involved in a California class-action lawsuit on this issue, and the initial finding is that at least one former driver should have been classified as an employee, which Uber has appealed. As I wrote in July (see Handicapping On-Demand Market Sectors):

At core, there is a real question of worker misclassification in the on-demand marketplace. In labor law, there are certain litmus tests to determine whether a supposed contractor is actually a misclassified employee. The employer has a strong incentive to claim that the worker is a contractor, because that allows the firm to sidestep taxes, legal liability, and the purchase and upkeep of equipment (like cars, insurance, and gas, in the case of Uber). If the worker is controlled directly by the company — is told how and when to provide services (like Homejoy scheduling appointments instead of its ‘contractors’ making those arrangements), what tools or equipment to use, and specific procedures to follow (like the Lyft fistbump) — then misclassification becomes more likely.

Consider the level of control that Handy — a Homejoy competitor — applies to its ‘contractors’. As Ellen Huet wrote in “Contractor or Employee? Silicon Valley’s Branding Dilemma”:

Handy tells its cleaners how to dress, but it also tells them when to knock or ring the doorbell, whether to shake a customer’s hand (always), whether to ask if they should take off their shoes (always), whether they can talk on the phone during the cleaning session (never), and more, the suit says.

Those specifications likely make customers feel secure and at ease. They also violate many of the IRS standards for independent contractors, which say that contractors can’t be told when, where, and how to do the work.

Uber claims its drivers prefer being contract workers. However, the company doesn’t offer the alternative of full-time employment for drivers, so the experiment to prove that claim hasn’t been run. Whether drivers believe they are employees or contractors — many hold other jobs, and some preclude other ’employment’ — is a factor in the analysis, but may be moot in the final analysis, since so much of the control of the work is in the hands of Uber.

The reality is that we may need to develop a third category of worker to better match the times we are living in. Again, from Handicapping On-Demand Market Sectors:

On-demand work has risen in the national conscience to the point that presidential candidates and senators are asking difficult questions. Hillary Clinton recently said that she plans to ‘crack down on bosses who exploit employees by misclassifying them as contractors or even steal their wages.’ Senator Mark Warner gave a talk at a DC-based think tank, New America, in which he supported the idea of a third category of worker, saying,

For many of these online and contingent workers, they’re operating without any safety net below them. They may be doing extraordinarily well — until they’re not, and then there is nothing to catch them until they end up, candidly, back on the taxpayer’s dime.

The third worker class has many possible facets, like a way to have a freelancer’s clients contribute on a pro rata basis toward the sorts of benefits that full-timers receive. For example, freelancers today must cover the half of Social Security that employers pay for their employees, on top of the employee’s half. In an ‘hours bank’ model, clients would pay a share: if a freelancer worked 10 hours of a 40-hour week for company X, that company would contribute one quarter of the week’s employer-side Social Security contributions, and the freelancer’s other clients would cover the rest. (A worked example follows at the end of this post.)

So, we’ll have to see where this newest detour leads for Uber and the on-demand economy. It seems unlikely that the genie will be chased back into the bottle and that on-demand work will fade away. However, new protections for the precarious nature of on-demand work obviously need to be put in place, so that the downsides of being an independent worker do not fall solely on the shoulders of the workers; neither should they be socialized by state and federal governments, so that taxpayers wind up subsidizing the on-demand employers. Ultimately, Uber and the other companies that tout their exponential economics will have to return some of the 20 to 30 percent they slice out of every transaction and reinvest it in higher salaries, pensions, and insurance for those making the on-demand economy go. It’s just a matter of time.

Seattle brings more congestion on the highway to the on-demand economy originally published by Gigaom, © copyright 2015.
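The ‘hours bank’ arithmetic above is easy to make concrete. The sketch below assumes a 40-hour week and uses the real 2015 employer-side Social Security rate of 6.2 percent; the client names and wage are invented.

```python
# Pro rata "hours bank" sketch: each client funds the employer-side Social
# Security contribution in proportion to the hours it bought that week.
# The 6.2% employer rate is real (2015); the clients and wage are made up.
EMPLOYER_SS_RATE = 0.062

def hours_bank(hours_by_client: dict, hourly_wage: float) -> dict:
    contributions = {}
    for client, hours in hours_by_client.items():
        contributions[client] = hours * hourly_wage * EMPLOYER_SS_RATE
    return contributions

week = {"company_x": 10, "company_y": 20, "company_z": 10}  # a 40-hour week
for client, amount in hours_bank(week, hourly_wage=25.0).items():
    print(f"{client}: ${amount:.2f}")
# company_x bought a quarter of the week, so it funds a quarter of the
# employer-side contribution ($15.50 of the $62.00 total).
```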

posted about 1 month ago on gigaom
Facebook has announced two changes to the way it enforces its real-name policy: the first is meant to ensure that fewer people are asked to prove they’re using the same name online that they use in real life; the second is meant to make the verification process easier on the users who will still be required to confirm their identities.

The changes follow months of criticism from people endangered by Facebook’s real-name policy, such as activists or victims of domestic violence, and from people whose names are unusual or who identify with a name other than the one they were given at birth. These complaints led to the creation of the “Nameless Coalition,” which advocated for Facebook to change its real-name policy to accommodate people who might need to use a “fake” name for their own protection or who identify with another name. Dozens of organizations and individuals supported the coalition’s goals.

Facebook’s Chris Cox previously apologized for the real-name policy’s failings and explained that it’s enforced because it’s “part of what made Facebook special in the first place” and is the “primary mechanism we have to protect millions of people every day, all around the world, from real harm,” as he wrote at the time.

Now the company will require people to provide additional context when they report someone for using a fake name. “In the past, people were able to simply report a ‘fake name’ but now they will be required to go through several new steps that provide us more specifics about the report,” Facebook said today. “This additional context will help our review teams better understand why someone is reporting a name,” product manager Todd Gage and vice president of global operations Justin Osofsky wrote in the announcement, “giving them more information about a specific situation.”

And that’s not the only fix being made. Facebook will also ask people to explain their situations when they’re reported for using a fake name. “People can let us know they have a special circumstance, and then give us more information about their unique situation,” Gage and Osofsky wrote. Facebook will consider this info when responding to the issue.

These changes still put the ultimate decision on a person’s identity in Facebook’s hands. The company has no intention of getting rid of the real-name policy, as it’s core to many Facebook services, and the fact remains that the social network will have the power to kick someone out if it thinks their identity isn’t authentic. Still, incremental changes that could help some people are better than maintaining the status quo just because the company refuses to nix the real-name policy. Facebook is still learning — there will doubtless be people who abuse the reporting tool to harass others, or who are erroneously flagged — and likely will be for some time.

Facebook changes enforcement of harmful real-name policy originally published by Gigaom, © copyright 2015.

posted about 1 month ago on gigaom
Facebook is quietly testing a new feature that allows its users to find "local businesses with the best Facebook reviews and ratings" through its website. The feature allows Facebook users to look up everything from automotive repair shops to wedding planners near their locations. (They can also manually search for businesses in a different area.) Results are shown with review snippets, a star rating, contact information, and the address. It is, in other words, just like Yelp. Businesses seem to be ordered based on the number of reviews they've received. This makes sense for Facebook — why have something with two reviews as the top result when another business has dozens? — but it's weird to see a business with a four-star rating appear far above a business that's often given five stars. Professional Services, as the feature is called, makes sense for Facebook. The company has been trying to become more useful to businesses lately, whether it's by introducing the Facebook at Work service or allowing them to stay in touch with customers via the Messenger platform, as it diversifies its revenue sources. The surprising thing is that it might actually make sense for Facebook's users, too. Even though Yelp has remained the de facto standard for finding businesses, Facebook might have the edge in some cases. I searched for gyms in a nearby city, for example, and Professional Services had many more reviews than Yelp. Facebook also has the benefit of requiring people to use their real names with their profiles. It's easier to trust a review from a friend, acquaintance, or family member than it is to trust anonymous users. You know whether your aunt has wonderful taste in hairdressers; you don't know the same about XxBoWlCuTsxX. But posting under real names might have its drawbacks. GlobalWebIndex, a firm that regularly surveys Facebook users about their habits, told Gigaom that just 10 percent of respondents to a recent survey "posted a negative comment about a product or brand." Reviews could be skewed by users afraid of being seen as mean. This is clearly a quiet test of a potential feature. Facebook never announced that it's testing something like this, and a request for comment on this story wasn't immediately returned. The company isn't yet competing head-to-head with Yelp, but I wouldn't be surprised if Professional Services gets more attention soon. h/t Search Engine Land. Facebook tests Yelp-like service devoted to local businesses originally published by Gigaom, © copyright 2015. Related research and analysis from Gigaom Research (subscriber content): How to choose between DIY and hyper-convergence in the data center; How personal analytics can streamline business decisions; How and why to implement a successful data lake.

posted 2 months ago on gigaom
Microsoft is relaunching its Bing Pulse audience engagement and market research service to make it clear that it reaches farther than the search engine. The service will now be called Microsoft Pulse — no relation to the LinkedIn Pulse news app — in honor of its connections to Skype, Yammer, and other tools. A pilot version of Pulse was tested by Fox News during the State of the Union address in 2013. The broadcaster used Pulse to ask viewers how they felt about the president's speech, analyze the responses, and show them on its broadcast. More than 700,000 respondents voted almost 13 million times in a single hour. That early success led Microsoft to expand the platform to other broadcasters, teachers, and conference organizers, all of whom use Pulse to poll their audiences. Pollsters can use the service, which is connected to Microsoft's Azure platform, to get responses from anyone willing to click a link and answer a few questions. "It's an engagement tool and a surveying tool," said Microsoft technology and civic engagement director Dritan Nesho. "You're both solving for the problem that most online polls find of having very low responses, because you have them engaged with a particular form of content, and you're collecting that feedback." The idea is that Pulse can create a feedback loop: Broadcasters solicit opinions via email, social platforms, or on-air callouts; viewers interact with Pulse instead of playing a game or checking their social feeds; and their responses are put on the screen for them to enjoy. It's something like a perpetual entertainment tool. "Whenever people are watching a program that's being broadcast on TV or participating in meetings and conferences, the majority of them are interacting with their phones," Nesho said. "Oftentimes what they're doing is veering their attention away onto someone else's platform and some other form of content." Pulse is supposed to help presenters keep their audiences engaged with them. Now the tool is tied to many of Microsoft's other platforms. Skype for Business can be used to solicit opinions from people watching a massive teleconference. Yammer can collect sentiment from a company's workers. Microsoft's Power BI service can be used to analyze the data. And the Azure platform powers it all. Hence the relaunch. Much of the service remains the same — it's still free to use, it's still connected to Bing, and its core purpose is unchanged — but rebranding the service gives Microsoft more freedom to continue expanding it. (Nesho said the Pulse team would "love to be integrated with Xbox very soon," for example.) "Our current focus and our current goal is making sure that people start using this product frequently, at scale, and get their reactions to it so it's something that becomes part and parcel of their everyday activities and becomes useful to their needs for engaging the customers they have," Nesho said in our interview. Dropping the "Bing" from the name is one way for the company to do just that. Sure, it's picking up a "Microsoft" in the process, but at least then it's clear that Pulse is tied into many of the company's services instead of seeming like it's restricted to the also-ran search engine that never threatened Google's throne. Bing Pulse engagement and research tool drops the "Bing" originally published by Gigaom, © copyright 2015. Related research and analysis from Gigaom Research (subscriber content):
IoT Hardware Opportunities; Handicapping On-Demand Market Sectors; A planning framework for disruption.

posted 2 months ago on gigaom
3D printing technology has made a lot of advancements lately, prompting people to create more useful objects. People have long dreamed about being able to select a car online, download a design, and print it in the privacy of their own home. That dream is quickly becoming a reality thanks to developments from Local Motors. Innovative technology has made it possible for the car manufacturer to create the world's first working 3D-printed vehicle. The Strati: The idea of a 3D-printed car is not a new one. Before the LM3D Swim, Local Motors built the world's first 3D-printed car, the Strati. Built and printed in Detroit, this electric car was the first step toward mass-producing printed cars. The Strati changed the way the world thought about 3D-printed vehicles. Printed cars, such as the Urbee, were created as early as 2010, but they weren't as mechanically involved as the Strati. In the past, car panels and features were printed, then placed on a traditionally built structure. This meant that important components, such as the battery or motor, were not created using the 3D printer. The Strati used direct digital manufacturing for the majority of its components. Building a 3D-printed car isn't easy. To "print" a Strati, Local Motors had to first create the body using a Big Area Additive Manufacturing (BAAM) machine. After the body was printed, it was finished with subtractive manufacturing on a computer numerical control (CNC) routing machine. That still didn't include all of the features: additional components were added over the course of several days. Manufacturing took a total of five days, with 44 hours of printing. Local Motors plans on using this innovative process to further explore car customization and, eventually, cut manufacturing costs. While the Strati was a small electric car, the company hopes to appeal to a wider audience by offering several different 3D-printed vehicles. Sport versions of the LM3D Swim, for example, are expected to be produced in the future. (Pictured: Local Motors' new LM3D Swim.) The LM3D Swim: Expected to be released in 2017, the Local Motors LM3D Swim uses a unique manufacturing technique. While the Arizona-based company isn't a household name yet, it does have a history of working quickly to create innovative designs. The LM3D prototype has already been produced, but future models will have a slew of customizable features. Because each vehicle is being 3D printed, buyers will be able to select from several different aesthetic features. Removable panels are a possibility, which would allow buyers to have much more control over the design they choose. Despite the advanced 3D capabilities, all vehicles would have the same powertrain and electric motor. Not all of the components will be 3D printed in the comfort of your own home. Body panels and the chassis would likely need to be traditionally manufactured. Local Motors has been working on a way to have as many parts printed as possible; as much as 90 percent of the car will be printable using a composite ABS plastic and carbon fiber material. Even upgrades wouldn't likely be performed at home. Local Motors says it plans on melting down each car from time to time in order to provide key upgrades. By melting unwanted components, the company can easily recycle them, cutting down on costs and waste. Can you print a car at home? While 3D printing technology does make it possible for buyers to print a car at home, it is impractical for them to do so. Local Motors is currently constructing a new microfactory in which to print and assemble the vehicles.
Construction in Knoxville, Tennessee, is expected to be complete early next year, allowing the company to continue to focus on car designs and capabilities. Local Motors is striving to get the LM3D series on the market. While each car has a hefty price tag of $53,000, the company is expecting several pre-sales. Preorders for the vehicles will start in 2016, but it will still be at least a year before anyone gets their 3D-printed vehicle. Because the manufacturing process and car style are still untested, federal regulations will require several tests to be performed before sales can begin. Standard crash testing is expected to start in 2016, with highway certifications quickly following. 3D printing technology has a lot to offer the automotive industry. The innovative technology that Local Motors is using will pave the way for more designs and advancements in the future. With successful pre-sales and testing, Local Motors could become a household name. Matthew Young is an automotive reporter from Boston. As a freelance journalist with a passion for vehicles, Matthew writes about everything on four wheels, be it race cars, SUVs, vintage cars, you name it. When he is not at his desk writing, he can usually be found helping his dad in the garage. You can reach Matthew @mattbeardyoung. Exploring the world's first 3D-printed cars originally published by Gigaom, © copyright 2015. Related research and analysis from Gigaom Research (subscriber content): A market analysis of emerging technology interfaces; The legal challenges and opportunities for 3D printing; Bitcoin: why digital currency is the future financial system.

posted 2 months ago on gigaom
Facebook won't let the Moments app join its growing pile of abandoned projects. Even after it shut down its Creative Labs division and the experimental apps that emerged from it — including Riff, Slingshot, and Rooms — the company is doing its best to convince its users to download the standalone photo-sharing application. To do that, the company will remove the ability to synchronize photos across multiple devices from the main Facebook app. (You know, the one that has steadily become less important as many of its functions are split into standalone applications like Messenger.) Now the feature will be exclusive to Moments. A Facebook spokesperson provided Gigaom the following statement via email: Starting this week, we are beginning to phase out Facebook's photo syncing feature. This is an opt-in experience that syncs photos taken on your mobile phone to a private section on Facebook, viewable only to you, where you can view or post the photos if you choose. The feature was launched in 2012 when people took photos on their phones, but still posted primarily from computers. People that use the photo syncing feature will have the option to move the photos they've previously synced to our new app Moments, where they will be able to view, download, or delete them. If they don't want to download Moments, you will also be able to download a zip file of your synced photos or delete them from your Facebook profile on your computer. Some users could welcome this change. Moments is much better at syncing photos than the main Facebook app, and it comes equipped with features like facial recognition and tools that make it easier to get a friend's photos from an event, so it's not like Facebook is forcing an incompetent service on its users. But it's hard not to view this as yet another of Facebook's attempts to become the primary interface people use to interact with their digital lives. No longer can someone download the Facebook app and do anything they want with the service — now they must install a bunch of standalone apps to achieve the same result. The main Facebook app offers access to the news feed; Messenger lets people stay in touch with friends and family; Moments helps people manage photos; Instagram allows them to share photos with the outside world; WhatsApp makes it easy to stay in touch with people who don't use Facebook; the list goes on. And we've reached the point where even those apps have standalone utilities. Instagram has Layout, Boomerang, and Hyperlapse. Messenger has Selfied, Strobe, and Stickered. It's surprising that WhatsApp hasn't been broken into multiple pieces, or spawned a bunch of little apps that augment its service. Releasing all these standalone apps does make things easier for Facebook users. People don't have to download Messenger, Instagram, or their add-ons if they don't want to. Sure, they'll be limited by what the main Facebook app can do, but they won't have to jump between multiple apps to accomplish simple tasks. The gambit also gives Facebook more and more reach on people's home screens, though, and that could mean more time spent with its products. Instead of being confined to a single app icon, the company could now fill most of a home screen with just its applications. Facebook, in other words, is breaking out of its box. It's easier to do that with Moments than with the apps Creative Labs introduced.
So even as it cleans house, Facebook appears to be doing its damnedest to make sure people who might benefit at all from Moments' photo-syncing are going to download it, use it, and devote even more of their home screens to its services. Facebook to make photo-syncing feature exclusive to Moments app originally published by Gigaom, © copyright 2015. Related research and analysis from Gigaom Research (subscriber content): How to choose between DIY and hyper-convergence in the data center; How personal analytics can streamline business decisions; How and why to implement a successful data lake.

posted 2 months ago on gigaom
Atlassian is the company behind JIRA, HipChat, Confluence and BitBucket, all of which are aimed at making collaborative efforts within companies easier and more efficient. The company is one of Silicon Valley's oft-fabled "unicorns" — that is, a tech company whose valuation has surpassed the $1 billion mark — and last week it saw its shares jump from the initial price of $21 to just over $27, where they have held for the most part. Atlassian was founded in 2002 and specializes in workplace software. Most of its products are aimed at streamlining workplace communication and simplifying collaboration in teams. HipChat, one of its most popular products, is an email-buster comparable to Slack that brings ongoing correspondence out of lengthy email threads and into a simple chat interface shared by teams and departments within a company. JIRA Software is a project-tracking software development tool. JIRA Service Desk is a task management platform that allows teams to coordinate the living, breathing, changing tasks that service teams everywhere have to juggle. From BBC to Adobe and NVIDIA to Land Rover, Atlassian products are used by over fifty thousand teams worldwide. That's great, but ultimately just the tip of the iceberg where the company's concerned. With the successful IPO under its belt, Atlassian's chasing down some seriously lofty goals. "Our mission, ultimately, is to have every employee inside of every company using Atlassian products every day," says Atlassian President Jay Simons. "And when you consider that there's more than 800 million knowledge workers around the world, that's a pretty big ambition and it'll take a while to get there. The IPO doesn't really change that. That's basically been a goal of the company since inception." A pretty big ambition, indeed. But it's a pretty big market, too, and it's no secret that email's not particularly well-suited to the way that we work today. Between inboxes that tend to get cluttered and our own abysmal skills at staying on top of the constant digital deluge, email's become something of a dirty word in some circles. Though email's something of a necessary evil that likely won't be going anywhere (no matter how much I wish the opposite were true), Atlassian products exist largely to bring conversations and collaborative efforts that don't belong in our inboxes into more appropriate arenas. Even with fifty thousand companies already onboard, there are still thousands of teams stuck in the cluttered trenches of email-only communication. "I think there's a tremendous amount of white space across teams with a lot of inefficient use of email," says Simons. "I don't think email's going away anytime soon because it is an effective way to direct certain kinds of communication to people, but I do think that when you use our products, your inbox becomes a lot smarter, more directed and more appropriate for what email's good at." In Simons' eyes, the successful IPO signals a recognition that what Atlassian's doing is not only working, but that there's room to grow — more tasks to manage, more email chains to prevent, more projects completed on time with fewer hiccups and dropped balls. The way we work is changing, and the market's response would seem to suggest that Atlassian's going to be around to usher in some of these changes in the way we get things done.
"I think that the market and the investor enthusiasm recognizes that we've built a pretty special company," says Simons, "and also recognizes that there's a big opportunity in front of 800 million knowledge workers worldwide and teams all over the place that are trying to figure out how to work better together." Atlassian's IPO is just part of its lofty goal for the workplace originally published by Gigaom, © copyright 2015. Related research and analysis from Gigaom Research (subscriber content): How work media tools are shaping business in 2015; New ways to map how businesses operate; Workplace tools and tasks that will change in 2015.

posted 2 months ago on gigaom
Uber chief executive Travis Kalanick must feel like Santa Claus. While other tech companies have killed off various products during this holiday season, Uber has introduced new experiments in Toronto, Seattle, and Chicago. Merry Christmas! But I doubt the company is handing out these gifts because of its holiday cheer. It started with an expanded UberEats service in Toronto. The test will allow people who live in the city to download a standalone app — the first Uber has released — to order food off the full menus of some one hundred restaurants. (A limited version of the service with a restricted menu is available in other cities.) Then the company decided to test a service called UberHop in Seattle. It's a glorified bus route: People will pay $5 to be picked up at select locations and dropped off at pre-determined destinations at particular times. Uber is, as New York magazine put it, emulating the most hated form of public transportation. It also announced UberCommute, a service that will allow Chicago residents to carpool to reduce the cost of the trip for the driver and the passenger. Drivers will be partially compensated for gasoline used on the trip, and passengers will pay less than they would if they summoned a ride through Uber's other services. Of these services, the one that makes the most sense is UberEats. There isn't much point in using UberHop instead of an existing bus system, and something like UberCommute seems likely to exacerbate Uber's violence and harassment problem, given that it's not running background checks on the service's drivers. But the services themselves aren't important. The key thing here is that Uber is teasing potential investors with all the possible expansions the company might make — and it's doing so while trying to raise as much as $2.1 billion in funding at a valuation of $62.5 billion, according to a report from Bloomberg Business. This could be part of a standard fundraising tactic: Strut your stuff, promise to innovate, and watch as the billions start to roll in. It might also, as Pando's Sarah Lacy argued last week, be a result of Uber's inability to tout the potential of the Chinese market to investors. These pilot programs help Uber's cause either way. Uber's heart hasn't grown three sizes this month. It's just finally decided to use its potential to branch into areas other than rides between points "A" and "B" to woo potential investors. Considering that the company has already raised more than $8 billion in funding, there's a good chance this strategy will work. Uber bucks holiday trend of killing products by testing new services originally published by Gigaom, © copyright 2015. Related research and analysis from Gigaom Research (subscriber content): Handicapping On-Demand Market Sectors; A planning framework for disruption; 4 frameworks for understanding industry change.

posted 2 months ago on gigaom
Tim is CEO at DataSift. Cookies were once the kings of the advertising industry, but times are changing. Once upon a time in the world of online marketing (or five years ago), every time you visited a website, a pixel would drop a web cookie into your browser. Cookies served as a tagging device to identify your computer among the millions of other users browsing the Internet, tracking your activity as you visited pages online. This meant marketers could use this "cookie profiling" to reach you via an ad exchange, or an automated pool of ad impressions. Marketers targeted potential customers based on their browsing patterns, displaying ads tailored to age, marital status, or political affiliations. When one person equaled one browser, this targeted ad model worked. Fast-forward to now, with consumers spreading their digital time among various devices and apps. Today's consumer is a multi-tasking, multi-talented research expert who communicates, researches and shops on multiple devices. In fact, industry reports estimate that consumers spend an average of three hours each day on their mobile devices. For marketers, following a consumer's journey across digital destinations and touch points is difficult, and for those who rely on cookies to track your digital trail, the task is harder still: they must connect all the cookies across all of the devices and apps that you use. How can companies track consumer demand as well as anticipate and meet consumer needs? How can marketers best listen to customers? Mobile ate the cookie: Traditionally, marketers relied on pixel tracking, a cookie-based technology that uses code to anonymously follow people around the Web as they visit webpages. When a potential customer visits a page, the code drops a browser cookie. The cookie then determines when to serve ads, ensuring ads are sent only to those who have previously visited a particular site. Pixel tracking provides information on who is clicking through on ads and ending up on a sales page. (A bare-bones sketch of this mechanism appears below.) However, five years from now, we'll look at companies that use pixel-based marketing in the same way we view someone who still has a GeoCities email address — another addition to the Wayback Machine. eMarketer estimates that by 2018, mobile will account for 70 percent of digital marketing spend. And mobile is complicated: activity inside apps doesn't touch browser cookies at all, and mobile browsers often restrict third-party cookies. Given consumer mobility, pixels and cookies fail to connect users as they shift between their various devices. People, and why they matter more than ever: We are moving from pixels to people — more specifically, people-based marketing. People-based marketing allows the marketer to learn — from actual humans — about an audience's sentiments and reactions to a company's products, performance and prices. For example, Facebook Atlas demonstrates that there is a simpler way to enable marketers to reach people that does not involve using pixels. Atlas represents the two foundations of people-based marketing: allowing consumers to establish their identity through opt-in/log-in and accounting for consumer cross-device and cross-channel activity. With Atlas, brands and agencies can measure ad campaigns across screens and solve the cookie issue, targeting real people across mobile and the Web. Atlas uses Facebook's ID (rather than a cookie) to follow a user's journey from mobile to desktop and back. Humans are at the core of people-based marketing, of course, and mining human data intelligence is integral to marketers understanding their audience.
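As promised above, here is a bare-bones sketch of the pixel-and-cookie mechanism the piece describes, written with Flask. The endpoint, cookie name, and print-based logging are hypothetical stand-ins for what a real ad network would do at scale:

    # Sketch: a 1x1 tracking pixel that assigns a visitor cookie and records
    # page views. Hypothetical names throughout; a real system would write to
    # a profile store and sync IDs with ad exchanges.
    import uuid
    from flask import Flask, request, make_response

    app = Flask(__name__)
    TRANSPARENT_GIF = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
                       b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
                       b"\x00\x00\x02\x02D\x01\x00;")

    @app.route("/pixel.gif")
    def pixel():
        # Reuse the visitor's ID cookie if present; otherwise mint a new one.
        visitor_id = request.cookies.get("visitor_id") or str(uuid.uuid4())
        print("visitor", visitor_id, "viewed", request.args.get("page", "unknown"))
        resp = make_response(TRANSPARENT_GIF)
        resp.headers["Content-Type"] = "image/gif"
        # This third-party cookie is exactly what stops working once activity
        # moves into mobile apps, which is the article's point.
        resp.set_cookie("visitor_id", visitor_id, max_age=60 * 60 * 24 * 365)
        return resp

    # Publishers embed something like:
    # <img src="https://tracker.example/pixel.gif?page=shoes" width="1" height="1">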
Human data, which includes real-time feedback from real humans, is quite valuable. Brands can analyze this information to track customer responses, patterns and trends to refine their competitive edge. The ubiquitous nature of social media has prompted businesses to adjust their marketing strategies accordingly. Recent research indicates the average American spends 37 minutes per day on social media. Also, 46 percent of web users look toward social media when making a purchase. Meanwhile, 65 percent of B2B marketers invest in social media to gain market insights. Human marketing requires human input: The marketer's goal has moved beyond serving ads to the same people on the same device toward gaining a greater understanding of — and even anticipating — consumer buying patterns. When you consider that more than half of all site visits are not even made by people, but by bots, it's clear active human participation is vital to successful marketing. For example, to properly engage with potential customers regardless of device, ask their permission by requesting their email address. Urge customers to opt in and provide product preferences so that you, the marketer, can personalize and improve the customer experience. Brands and agencies must first understand their audience before they can build and nurture customer relationships. This "understanding" involves going where customers are (social media on mobile devices), listening, communicating the value they offer to the consumer, and enabling user control. Modern marketing is really people-based marketing and should involve choice and convenience across multiple devices. Only then will today's marketers gain an accurate understanding of their market, customers, and future customers. People-based marketing is key to humanizing the consumer experience originally published by Gigaom, © copyright 2015. Related research and analysis from Gigaom Research (subscriber content): Applying lean startup theory in large enterprises; The mobile shopping apps consumers value most; Online video courts TV dollars.

posted 2 months ago on gigaom
As businesses wind down for the holiday period, they'll need to keep their cyber defenses up. While executives are tucking into their dinners, hackers will be trying to tuck into their businesses' data. High-profile breaches this year at organizations ranging from Anthem Healthcare to Ashley Madison and the US government's Office of Personnel Management are a reminder of the threats that lurk online. And they raise the question of whether the cyber security industry can come up with a powerful new tool to frustrate the bad guys. There's been plenty of discussion at security conferences about the impact that machine learning will have on the cyber landscape. A subset of artificial intelligence, it involves the use of powerful algorithms that spot patterns and relationships in historical data, and get better over time at making predictions about brand-new data sets based on this experience. Companies such as Amazon and Netflix use machine learning to help drive their recommendation engines, and banks and other financial institutions have long used it to tackle credit card fraud. Now, we are starting to see some cyber security firms offering solutions that involve a machine-learning component. Huntsman Security, which counts intelligence agencies amongst its clients, recently announced what it claims is the security industry's "first machine-based threat verification technology," which uses machine-learning algorithms to help analysts spot serious threats swiftly and take corrective action. Startups such as Cylance, Palerra and Darktrace are also employing machine-learning techniques in their services. [Disclosure: Wing Venture Capital is an investor in Palerra.] It's tempting to portray machine learning as a silver bullet that can be used not just to wipe out hackers but also to wipe out jobs by automating tasks performed by expensive personnel. This has provoked a backlash from some commentators, who have warned companies not to waste money on an unproven technology, and encouraged them to invest more in security teams and other tools instead. However, that critique is based on a false claim about the technology's potential — and a false dichotomy between human and machine. Let's take the issue of efficacy first. Machine-learning models work best when they can "train" on large volumes of data. Thanks to the rise of big data and extremely cheap storage, it's now possible to feed vast amounts of information into models, which greatly improves their ability to detect suspicious activity. The goal is to distinguish anomalous behavior in things such as network traffic that might indicate a breach while minimizing false alerts (or "false positives," to use the industry's terminology). There are certainly challenges to be overcome. Algorithms are only as good as the quality and quantity of the data they are trained on, and data sets on the most sophisticated kinds of attacks mounted by nation-state actors (or their proxies) are still relatively thin. Sophisticated hackers can also try to fool models by employing tactics that seek to convince them that malicious activity is in fact legitimate. In spite of such caveats, the machine-learning approach is still a great asset in a defensive arsenal. Given the volumes of data that security teams now have to deal with, adopting a more automated approach to querying network traffic and looking for anomalies that are not detected by traditional, signature-based systems makes sense.
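To make that concrete, here is a toy sketch of unsupervised anomaly detection over network-flow features, using scikit-learn's IsolationForest. The features, figures, and contamination rate are illustrative assumptions, not any vendor's actual product:

    # Sketch: train an isolation forest on "normal" flow features, then flag
    # flows with an unusual shape. Toy data; real systems train on far richer
    # telemetry and tune the contamination rate carefully.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)
    # Features per flow: [bytes sent, bytes received, duration in seconds].
    normal_traffic = rng.normal(loc=[5_000, 20_000, 30],
                                scale=[1_500, 5_000, 10], size=(1_000, 3))
    model = IsolationForest(contamination=0.01, random_state=42).fit(normal_traffic)

    suspect_flows = np.array([
        [6_000, 21_000, 28],    # looks like business as usual
        [900_000, 1_000, 600],  # huge upload, tiny download: exfiltration-shaped
    ])
    print(model.predict(suspect_flows))  # 1 = looks normal, -1 = anomalous

A supervised layer of the kind described next would then let analysts confirm or dismiss whatever the model surfaces.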
For instance, an analyst who has threat intelligence suggesting a network may be subject to a particular kind of data exfiltration attack could task a machine-learning model with looking for telltale signs of this. Models can also provide analysts with other valuable insights, such as correlations between suspicious events. To minimize false positives, many models rely not just on "unsupervised learning," which involves crunching data to spot patterns on their own, but also on customer-driven, "supervised" learning. This can take the form of specific security policies, such as one that requires an alert to be issued if a bunch of sensitive files are suddenly sent to a new location. It can also involve analysts giving a digital thumbs-up or thumbs-down to the alerts that are issued. Over time, this training can help a model to identify what really matters to an organization and reduce the risk of false alerts. Will human trainers ultimately be displaced by the "machines" they teach? Some companies may use machine learning as an excuse to downsize, but I think they'll be the exception rather than the rule. When I speak to chief information security officers, I often hear that they are concerned about a worrying shortage of skilled cyber personnel. By putting machine-learning models to work in support of existing staff, security leaders can boost productivity and free up their teams to work on the most pressing and strategic issues. There is another consideration that might resonate at this time of year: algorithms don't need to take a holiday, so they can keep on working while some of their human masters are taking a well-deserved break! Martin Giles is a partner at Wing Venture Capital (@Wing_VC). He was previously a journalist with The Economist. For cyber security, machine learning offers hope beyond the hype originally published by Gigaom, © copyright 2015. Related research and analysis from Gigaom Research (subscriber content): How machine learning and predictive analytics are changing data science; Making data analysis accessible to business users; Why transparency matters when it comes to data analytics.

posted 2 months ago on gigaom
Making sense of your personal finances is a lot like flossing — it's tedious, it's not what you might call "fun," but it needs to be done. Otherwise, things start to get messy in a hurry. There are dozens of tools and apps aimed at improving our (occasionally tenuous) grasp of our own finances, but there's a new service that hopes to simplify the way we interact with our money. It's called Trim. The concept behind Trim is dead-simple: sign up, connect your credit card, and Trim will sift through your transaction data to find recurring payments and help you cancel any you decide you don't want. The idea came about when Trim co-founder Daniel Petkevich, who considers himself a pretty financially responsible person, found recurring payments on his statement that he wasn't aware of, including a renter's insurance policy for an apartment he no longer lived in. The realization that money was slipping through the cracks every month, virtually unnoticed, led him and co-founder Thomas Smyth to start Trim. It's Smyth who likens the practice of getting one's financials in order to flossing. It's as unglamorous as it is necessary, but Trim makes at least one aspect of personal finance management a little less painful in less time than it takes to floss (probably — I don't really know your life or how long it takes you to floss, but it took me all of one minute to get underway). Once you connect your credit card, Trim's algorithm sifts through your transaction data to find subscriptions — things you'd expect, like Netflix, Hulu, and Amazon Prime, and maybe a few that you've forgotten you're paying for or haven't gotten around to cancelling. I know what you're thinking: is this safe? Petkevich breaks it down for me, and the short answer is: totally, with the help of Plaid, an API designed to securely handle bank data. Plaid's raison d'être is to allow developers to access financial data securely, without risk to banks and customers. To connect your credit card to Trim, you simply log in to your bank through Plaid, and an encrypted read-only token is sent back to Trim. While Trim can help you cancel subscriptions you don't want, it can't access your accounts directly. No need to worry about attacks on Trim's servers, either. They're protected with Amazon Web Services (also used by NASA and the DoD) and 256-bit SSL encryption. Oh, and even if someone was feeling extra motivated and did manage to find a way into the servers, there wouldn't be anything to steal — Trim doesn't store your username or password or any other sensitive data. "Through these integrations with the banks, they only give read-only access tokens," says Smyth, "so there's literally no way for anything to go wrong or weirdly with your account." After Trim's algorithm has had a chance to parse your transaction data, you'll get a text detailing your subscriptions, from Spotify to the Wall Street Journal. For those diligent folks who keep careful track of monthly statements and expenses, these probably won't come as a surprise, but it's certainly helpful to have a monthly breakdown of just how much you're dropping on subscriptions every month. Those who tend to be a little less detail-oriented with monthly transactions, however, may find something rotten in the state of Denmark. Whether it's LinkedIn Premium charges, a Wall Street Journal subscription, or the notoriously difficult-to-cancel gym membership, Trim is good at weeding out the invisible financial skeletons in one's closet.
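Trim hasn't published its algorithm, but the shape of the problem is easy to sketch in Python: group charges by merchant and look for near-identical amounts arriving at roughly monthly intervals. The thresholds, names, and sample data below are my own guesses, not Trim's code:

    # Sketch: flag likely subscriptions among card transactions by finding
    # same-merchant charges of similar amounts at roughly monthly intervals.
    from collections import defaultdict
    from datetime import date

    def find_subscriptions(transactions):
        """transactions: list of (date, merchant, amount) tuples."""
        by_merchant = defaultdict(list)
        for day, merchant, amount in transactions:
            by_merchant[merchant].append((day, amount))
        likely = []
        for merchant, charges in by_merchant.items():
            if len(charges) < 3:
                continue  # need a few repeats before calling it recurring
            charges.sort()
            gaps = [(b[0] - a[0]).days for a, b in zip(charges, charges[1:])]
            amounts = [amount for _, amount in charges]
            if all(25 <= g <= 35 for g in gaps) and max(amounts) - min(amounts) < 1.00:
                likely.append(merchant)
        return likely

    txns = [(date(2015, 9, 3), "Netflix", 9.99), (date(2015, 10, 3), "Netflix", 9.99),
            (date(2015, 11, 2), "Netflix", 9.99), (date(2015, 10, 14), "Hardware Store", 42.17)]
    print(find_subscriptions(txns))  # ['Netflix']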
And while the average person saves about $15 per month ($180 per year) with the service, the current all-time high for unearthed monthly subscriptions is a baffling 95 for a single customer. Trim is totally free, and Smyth says they want to keep it that way. Right now, Trim is backed by private venture capitalists, and while the founders may one day consider adding a premium tier that includes more in-depth analysis and financial coaching, Smyth says that because Trim's service is essentially software that doesn't cost anything to run, it doesn't seem quite right to charge people in order to help them save money. There will always be a free version, and they mean really, truly free. "Personal finance is something you can always put off until another day, and we want to make it something you can do today just by making it as simple as possible," says Smyth. "I do think it's really, really important for us to just spend a minute doing something to get our financial lives a little more in order, and that's where we want to help." New startup aims to 'Trim' the fat from your monthly spending originally published by Gigaom, © copyright 2015. Related research and analysis from Gigaom Research (subscriber content): IoT Hardware Opportunities; Handicapping On-Demand Market Sectors; A planning framework for disruption.

posted 2 months ago on gigaom
Twitter is testing an expansion of its promoted tweets advertising unit that displays the sponsored messages even to people who aren't signed in to its service. The test could help Twitter address one of its core problems: People don't have to sign up for its service to take advantage of the public nature of its contents. Anyone capable of searching for someone on Google, or following a direct link to a public tweet, can peruse the platform without ever tweeting anything themselves. Many people take advantage of this fact. Re/code reports that roughly 500 million people who don't have Twitter accounts visit the service each month. That's greater than the 320 million users who log in to the site each month, according to Twitter's usage statistics, and a valuable audience for advertisers. "By letting marketers scale their campaigns and tap into the total Twitter audience, they will be able to speak to more people in new places using the same targeting, ad creative, and measurement tools," Twitter said. "Marketers can now maximize the opportunities they have to connect with that audience." According to Twitter's blog post announcing the test, promoted tweets will be shown to visitors without Twitter accounts whenever they view a user profile or a tweet's detail page. The experiment is currently limited to the desktop Web and is being tested by "selected advertisers" across the United States, Japan, the United Kingdom, and Australia. This isn't the first time Twitter has tried to appeal to new or potential users. Earlier this year the company introduced a new homepage meant to lure people into signing up for its service. It built "instant timelines" for people who sign up without knowing what to do next. The service has done everything short of making it so people can post, like, or retweet something without an account. These promoted tweets seem like an admission that not everyone who visits Twitter will grok the value in creating an account. But someone who decides not to sign up for the service can still be monetized, as long as they're willing to visit the site every once in a while and focus even a moment of their attention on an ad. Twitter monetizes 500M-strong audience of non-users originally published by Gigaom, © copyright 2015. Related research and analysis from Gigaom Research (subscriber content): IoT Hardware Opportunities; Handicapping On-Demand Market Sectors; A planning framework for disruption.

posted 2 months ago on gigaom
Uber is testing a service that will allow Toronto residents to order food from their favorite restaurants through a standalone application called UberEats. That name might seem familiar. That's because Uber has been testing the service, previously limited to a handful of meals, inside the main Uber app. Now it's giving users in one city a chance to determine whether or not UberEats can stand on its own, without being propped up by the popularity of Uber's core service. Not that the two can really be separated: Food ordered through UberEats will be delivered by Uber drivers who might be ferrying passengers at the same time. Uber's strength lies with that network of drivers (at least until self-driving cars take to the streets) and their willingness to drive non-humanoid objects around. UberEats is Uber's first standalone app. Whenever the company experimented with other services in the past, whether it was delivering puppies or fiddling with a healthcare service, it did so as an addition to the Uber app people already use. Now, as the company told Wired, it wants to give UberEats room to breathe. Toronto users can order food through UberEats between 10am and 10pm, seven days a week. Right now the company is said to support the full menus of "over a hundred" restaurants in the city. The service's Instant Delivery menu, which is available in the main Uber app, offers fewer options but much faster deliveries. Deliveries from UberEats will be free until 2016. After that it's not clear how much Uber intends to charge for the service, or whether it will be subject to the same "surge pricing" model that raises the cost of booking a ride with its main service when there's inclement weather, high traffic, or, previously, emergencies. UberEats is said to have taken three months to develop. So while the app seems like the realization of everyone's belief that Uber will eventually move essentially anything, it's hardly a bigger commitment than its other experiments. If you'll pardon a food pun: Let's not mistake an appetizer for the main course just yet. Uber teases foodies with standalone UberEats app originally published by Gigaom, © copyright 2015. Related research and analysis from Gigaom Research (subscriber content): Handicapping On-Demand Market Sectors; A planning framework for disruption; 4 frameworks for understanding industry change.

posted 2 months ago on gigaom
LinkedIn is expanding its Lynda.com platform to Roku devices, and in doing so it might prove streaming video services can be more than mindless entertainment. First, some background. LinkedIn spent $1.5 billion to buy Lynda.com in April. The platform boasts more than 4,000 courses featuring 150,000 videos made by expert instructors, and despite an emphasis on high production values, LinkedIn said in an email that it's adding more lessons to the platform every single day. (Pictured: Lynda.com's new channel on Roku set-top boxes.) The app available on Roku devices will provide access to all of these videos. It will even synchronize a user's position in various lessons across devices, so they don't have to worry about losing their place if they move from a TV to a laptop. The catch: Most videos are exclusive to members who pay $20 to $35 per month. "Our goal is to extend the Lynda.com footprint and create a new channel for users to engage with our content, while providing a consistent and seamless experience across multiple screens," a LinkedIn spokesperson said. "Now you or your family members can learn new skills from the comfort of your couch." Or they could do something cheaper. They could get access to countless movies and television shows from Netflix for $10. They could watch commercial-free television on Hulu for $12. Hell, they could even get access to HBO's original programming and videos unavailable on other streaming services for just $15. Compare that to the $25 a single month of Lynda.com access costs — the lower $20 price is for people who pay for the service annually instead of monthly — and it's easy to see where a budget-conscious person might choose to spend their money. How's education supposed to compete with endless entertainment? There are some real benefits to having an app available for set-top boxes, prime among them the ability to follow along with a lesson on a laptop without having to switch between multiple windows. It could also turn learning a skill into a group activity instead of an otherwise individual one. Existing subscribers to Lynda.com might rejoice at being able to view the platform's lessons on television sets. But with a monthly fee that could cover two other streaming services (almost three for Lynda.com's premium members), it's hard to see the Roku expansion getting more people to sign up for the platform. That might change if Lynda.com's subscriptions ever fall in price. Until then, however, it looks like the mindless entertainers are going to remain undefeated. LinkedIn expands Lynda.com to Roku with new learning channel originally published by Gigaom, © copyright 2015. Related research and analysis from Gigaom Research (subscriber content): How personal analytics can streamline business decisions; How businesses can provide mobile application discovery and promotion; The risks and rewards for the ride-sharing market in 2014.
