posted 1 day ago on gigaom
When Microsoft launched Azure Basic a few weeks back, it may have started a trend. Microsoft, while matching Amazon’s previously announced price cuts, also unveiled a set of “basic” general-purpose instances that are 27 percent cheaper than its own “standard” instances. What Basic doesn’t include is the load-balancing or auto-scaling that come as part of standard instances, according to a blog post. Fast-forward to this week, when Amazon roped off older EC2 instances from their newer brethren. When users go to buy EC2, they’ll be steered to the shinier, more muscular instances, but if they really want the older stuff, it’ll still be there — if they look for it. At least for now. Constellation Research analyst Holger Mueller expects we may see two-tier pricing from all IaaS vendors, with Microsoft Azure Basic blazing the trail. Amazon did not respond to requests for comment. Others characterize Amazon’s move as a continuation of its tendency to offer the newer stuff at a (slight) price premium and push older inventory to the back of the shelf. AWS has been doing “an almost GM-like model-year approach with new models being better value for money,” said Petri Aukia, CEO of Codento, the Finnish cloud computing consulting company. In his view, Amazon continuing to sell aged infrastructure is akin to IBM selling mainframes. One thing is clear: the entry of Microsoft and Google — companies with huge resources — into public cloud infrastructure will buffet the price models like gale-force winds. Google signaled another option with its sustained-use discounts that kick in automatically when workloads hit a certain bar of utilization. That’s a really attractive option even for AWS devotees who love their cloud but are sick and tired of tracking its pricing and utilization. Would Google consider price tiering? Not likely, according to Navneet Joneja, a Google Cloud Platform product manager, who said developers building new applications want the latest gear. “We believe they should not have to live with out-of-date technology or limit themselves to less than a full platform just to get lower prices,” he said via email, adding that Google’s sustained-use pricing gives them better value with the latest technology. I’m inclined to agree with Mueller that most cloud infrastructure players will end up sectioning off different substrates of base infrastructure at different price points based on age and capabilities. How customers want to buy cloud infrastructure — and how vendors will sell it — will be a big topic at the Gigaom Structure event in June, so check it out. Related research and analysis from Gigaom Research:Subscriber content. Sign up for a free trial.The Structure 50: The Top 50 Cloud InnovatorsWhat you missed in cloud in the third quarter of 2013What developers should know when choosing an MBaaS solution

Read More...
posted 1 day ago on gigaom
As chipmakers realize how powerful, and how plentiful, the silicon inside our connected devices has become, they are racing to own as much of the market as possible while publicizing their work in the internet of things. Yesterday, for the first time, Intel broke out details regarding the revenue associated with the internet of things. It was up 32 percent year over year to $482 million, a veritable nanometer of Intel’s $12.8 billion in total revenue for the first quarter. The fact that the world’s largest chipmaker decided to break out its IoT platforms and software products marks a trend in the chip world. But unfortunately for giants like Intel, which have focused on high-end x86 chips in servers and personal computers, the internet of things might hold a lot of promise, but its average selling price for silicon is generally low. It also requires a very different mindset about how to build and market chips. Even ARM, which is currently king of the smartphone and tablet landscape with its high-end application processor architecture, sells the IP for its lower-end microcontroller cores found in many IoT products for less per license on average. Are chipmakers ready for this? So the question for chipmakers at the high end is whether they can make it in a market selling a high volume of chips with low-end pricing. Or whether they want to invest in software and services to offset the lower profits associated with IoT silicon. Sensor hubs, microcontrollers and more! Meanwhile, for the companies that have been in the embedded markets for decades selling radios, sensors and microcontrollers that are smaller than 32-bit, the internet of things is a huge opportunity that’s right in their wheelhouse. I’ve covered this before, but focused mainly on microcontrollers and sensors. But thanks to the introduction of the iPhone 5s last year and its dedicated M7 motion-sensing processor, there’s a new opportunity that IHS iSuppli says will grow 154 percent from last year to the close of 2014. The analyst firm calls this market “sensor hubs,” and defines it as any processor that takes in and computes sensor data to avoid using a device’s application processor (if it’s a phone) or microcontroller (if it’s a smaller device). It estimates that worldwide shipments of sensor hubs in 2014 will reach a projected 658.4 million units. From then until 2017, the market is pegged to increase 1,300 percent to shipments of 1.3 billion units. The creation of a new processor type is worth noting, but it’s part of an interesting trend that companies ranging from Atmel and Qualcomm to Freescale and Texas Instruments are specifically building around — a market that needs many modular, small and low-power products for everything from a connected fridge that runs on a home’s power to a microcontroller and radio tucked under the jewel of a ring. This isn’t a variation in performance that Intel is terribly familiar with; this is a fundamentally mix-and-match mindset that optimizes not just around performance — or performance and power — but also around size, different types of sensors and even power management functions for specific battery types. Intel was late to the maker market with its Galileo boards, which launched last fall featuring a new, smaller Intel chip. That means others are racing far ahead, getting their products into as many hands, in as many different formats, as possible.
There are countless people building on open source platforms using chips from Broadcom or Atmel to make variations on the Arduino, or tiny Bluetooth radios designed for wearables, or even whole new processor designs. Check out the MicroView or the TinyDuino for examples of much-needed innovation on the basic computing offered by the Arduino. You don’t even have to be a totally open platform like the Arduino, either. ARM is backing Sunrise Micro Devices in the hopes of making a radio module containing a microcontroller with a longer battery life, because it sees a need for a higher-level package for makers and product designers experimenting with the internet of things. The idea is that one of these could be in the next Pebble watch, or even in hundreds of devices as a basic component in certain types of clothing, much like the light-up sensors in kids’ shoes today.
Challenge also means an opportunity
And it’s not just existing processor and chip vendors trying to offer a wider array of features and components in a variety of sizes and power budgets: there are whole new opportunities for processor designs. For example, the sensor hub concept is popular for accelerometers and gyroscopes, but one might eventually also add GPS into the mix and offload location as well, using smaller GPS chips such as the one announced Monday by CSR and OriginGPS. Outside of sensor hubs, a startup called Ineda has launched with $17 million in funding to build a microprocessor designed for wearables that boasts a 30-day battery life and a fundamentally different architecture from anything that exists today. There is clearly opportunity for chip companies, but it’s one that requires a lot of flexibility and an ability to recognize a few lessons still being learned by enterprise CIOs. For starters, developers/makers are your customers and partners; open is better than closed; and hardware and software are both delivered as a service, so the real money is in services. The challenge will be figuring out how to serve those developers while building a business around the margins and profits of lower-priced devices. Because there’s a heavy R&D investment chip firms have to make, balancing the protection of IP while also trying to embrace openness may offer some interesting business opportunities. In short, bringing everyday physical objects online is going to shake up the chip industry in a major way.
Bringing it back to Intel
So what’s happening in the Intel earnings isn’t as simple a story as Intel getting beat on mobile as mobile (and thus lower power) became more important, nor as nuanced as Intel’s data center story as end customers become both more demanding and more concentrated. Intel’s challenge in the internet of things will be building an array of products, each optimized for different variables, at price points that make sense for tiny devices. Or it will move into software and services associated with the enterprise and its premium chip products. This last strategy has so far been more successful. As a chipmaker, Intel has been engineered to produce a lot of one type of chip in great volumes, better and more cheaply than anyone else. Its strength may also be its downfall as the market looks for greater variation and customization. But its CEO is clearly pondering the role its fabs play in Intel’s future as it seeks to manufacture chips for other customers.
Given its decades of research into manufacturing, that’s a powerful asset Intel may yet use to ride the wave of connected devices at the hardware level.Related research and analysis from Gigaom Research:Subscriber content. Sign up for a free trial.The internet of things: a market landscapeCloud computing’s impact on chip and hardware designThe living room reinvented: trends, technologies and companies to watch

Read More...
posted 1 day ago on gigaom
Music subscription service Spotify is getting ready to switch its data delivery technology from P2P to a server-client model, according to a TorrentFreak report. Spotify has long been using P2P for its desktop client, but not for mobile and web listening, and it makes sense that the company is looking to streamline its data delivery as mobile usage grows and bandwidth prices continue to decline. With the shift, Spotify is also closing the book on a little-known part of its past: uTorrent creator Ludvig Strigeus started working for Spotify after he sold his company to BitTorrent Inc. That sale was facilitated by none other than Spotify CEO Daniel Ek, who briefly served as uTorrent’s CEO.Related research and analysis from Gigaom Research:Subscriber content. Sign up for a free trial.Bitcoin: why digital currency is the future financial systemHow the consumer space battled licensing issues in the fourth-quarter 2013How to compete with Facebook in 2013

Read More...
posted 1 day ago on gigaom
Jasper Wireless has quietly become a force in the internet of things, brokering and managing many of the connectivity agreements – including AT&T’s increasing number of connected car partnerships – that link appliances, gadgets and vehicles to mobile networks. On Wednesday, Jasper announced it has raised a $50 million round led by the government of Singapore’s investment arm Temasek Holdings, and according to the Wall Street Journal, the funding raises its valuation to $1 billion. Mobile connectivity has mainly been a big factor in the industrial internet of things, where shipping, trucking and many other industries have long used machine-to-machine connectivity. But Jasper’s M2M technology is gradually creeping into the consumer realm, in particular the connected car.  Related research and analysis from Gigaom Research:Subscriber content. Sign up for a free trial.The influence of M2M data on the energy industryConnected world: the consumer technology revolutionMobile Operators’ Strategies for Connected Devices

Read More...
posted 1 day ago on gigaom
Every now and then, the war that traditional media entities seem to be continuously fighting over reader comments — where they should be placed, how they should be managed and even whether they should exist at all — erupts into the open. This time around, the spark was an announcement earlier this week that the Chicago Sun-Times has eliminated the ability for readers to comment, while it tries to think of a way to handle them that won’t result in “an embarrassing mishmash of fringe ranting and ill-informed, shrill bomb-throwing.” The Sun-Times is just the latest to make this decision — some, confronted with the same choice, have ultimately decided not to have comments at all, or to allow Facebook to manage them. Popular Science was the most recent publication to do away with them entirely, a decision the magazine said was influenced by research that showed comments can negatively influence how readers perceive research. The Huffington Post, meanwhile, recently ruled out anonymity. The consensus among many of those who vote against comments — including a number of bloggers like TechCrunch writer-turned-VC MG Siegler — is that they add virtually no value, and that anyone who wants to comment can turn to Twitter or Facebook, or publish a critical take on their own blog. In other words, comments are unnecessary. But I think this is fundamentally wrong. Social media doesn’t fill the gap I’ve argued here a number of times that comments have value, even if they are filled with trolls and flame-wars, and also that anonymity and pseudonymity also have value — even if outlets like the Huffington Post choose to attribute all of their problems to those features. There is a long tradition of pseudonymous commentary in the United States in particular, especially when it comes to politics, and even Facebook CEO Mark Zuckerberg seems to have loosened his views on whether “real identities” are required for some social activity. As I tried to point out in a Twitter discussion about this topic with journalism professor Jay Rosen (who says he is agnostic when it comes to the subject of comments), Josh Benton of the Nieman Journalism Lab and a number of others — a conversation I have embedded below — I don’t think it’s enough to say that we can afford to do away with reader comments because Twitter and Facebook exist. In many ways, that’s just an abdication of responsibility. It’s true that much of the commentary on blog posts and news stories occurs on Twitter and Facebook, and probably Instagram and Snapchat for all I know. And there’s no question that social tools have eaten into the market for old-fashioned blog comments — even at Gigaom, we’ve noticed a decline over the past few years, in all likelihood because people have moved to other platforms and comments are no longer the only method for providing feedback. @mathewi The number of people who use a comments section but don't use Twitter/FB/blogs can't be very significant.— Scott Smith (@ourmaninchicago) April 16, 2014 That said, however, I think there are a number of risks involved in handing over the ability to comment to Twitter, Facebook and other platforms. As I argued in a separate debate with Scott Smith — who wrote a blog post arguing that we shouldn’t mourn the decline of comments — one of the dangers is that if your engagement with your readers occurs solely through these platforms, then they effectively control that relationship in some crucial ways. 
Smith argued that Facebook was “just the microphone,” but it is more than that: It’s the microphone, the hall, the electricity and even the town. Doing a service for readers Another risk is that journalists — who might be held to account for mistakes, or provided with additional useful information about a story or a point of view, which is one of the major benefits of two-way or multi-directional journalism — will cherry-pick the responses they wish to see on Twitter or Facebook, and miss others. It’s easy to say that you will follow up with everyone on every social platform, but it’s another thing to do so. Not only that, but handing everything over to social networks also diminishes one of the other major benefits of having comments, which is that everyone can see at a glance which journalists are interacting and which aren’t — and what their responses are. Sure, you could find out all of that by searching Twitter and Facebook and every other platform, but it would take a long time. Why not provide readers with that ability in a single place, right next to the content itself? Rosen and others argue that many bloggers and journalists respond via email, which is undoubtedly true. But there again, there is little to no transparency to those conversations (although some who use this method, including Andrew Sullivan of The Daily Dish, are good at publishing both the emails and their responses). News orgs: I can understand killing comments for lack of resources (human/financial). But stop blaming commenters, OK? This is on you.— Dan Gillmor (@dangillmor) April 15, 2014 But for me, one of the biggest criticisms of doing away with comments is that too many sites are throwing the baby — and a potentially valuable baby — out with the bathwater, without trying to come up with a solution or spend any time fixing them. Anil Dash has argued that if a site has a comment section that is filled with trolls and bad behavior, the responsibility for that lies with the website owner, because he or she has failed to spend the time necessary to improve the environment there. Why not try to improve them instead? As I’ve pointed out before, there are a number of interesting experiments going on with comments, including the “annotations” that Quartz has — which appear next to the paragraph they refer to, and were inspired by the way that Medium handles comments, which can also be attached to an individual section. Comment-software maker Livefyre just announced a new version that adds much the same ability to websites, instead of lumping comments at the bottom of a page. Even the New York Times has experimented with something similar. @jayrosen_nyu @jbenton @mathewi @dangillmor @jswatz I think not having comments may be rational, but is an immense missed opportunity.— Aram Zucker-Scharff (@Chronotope) April 15, 2014 There are a number of sites that have shown the potential value of comments — and not just individual blogs, like that of Union Square Ventures partner Fred Wilson, but sites like Techdirt. Founder Mike Masnick has turned his often-turbulent comment section into the foundation of a true community, and one that not only provides feedback but is a crucial part of his membership-based business model. It wasn’t even that hard, he says. Gawker’s Nick Denton has bet the farm on Kinja, the discussion platform that turns every commenter into a blogger — and is even prepared to take commenters and turn them into paid staff. 
For me at least, too much of the complaining about comment sections and the decision to do away with them seems to be driven not by the bad behavior in them, but by a lack of interest on the part of some journalists and media outlets in engaging with readers at all — and the hope that if there are no comments, maybe there won’t be any way to see the mistakes or call them to account. Post and photo thumbnails courtesy of Flickr users Tony Margiocchi, as well as Jeremy King.

Read More...
posted 1 day ago on gigaom
A small Austin startup called M87 thinks we would all have a better mobile data experience if we’d just share our phones’ 4G connections with one another. Apparently Qualcomm agrees with them. M87 has closed a $3 million Series A round of funding, which included new strategic investors Qualcomm Ventures and Chinese data center hosting provider 21Vianet along with M87’s original angel investors. M87 founders (from left): VP Marketing Matt Hovis, CEO David Hampton, Chief Research Officer Vidur Bhargava and CTO Peter Feldman. M87 sprang out of the University of Texas’s wireless engineering department after developing a crowdsourced connectivity technology that allows nearby phones to link up via Wi-Fi and use each other’s 3G and 4G connections to the mobile network. The technology is similar to the crowd mesh-networking technology developed by another emerging networking startup, Open Garden, but rather than offer it to consumers, M87 wants to sell it to carriers so they can link their subscribers together. At first glance, you’d think carriers would be against having their customers share connections, since selling individual data plans is their bread and butter. But M87 has developed a way for customers to share their radios with nearby users without dipping into their own data plans and without compromising their security. M87′s crowdsourced networking technology (source: M87) I took a detailed look at M87’s technology in my original profile of the company in December, but in short, M87 wants to turn every mobile phone into a node on the mobile network and send every data packet through the most efficient node. Such a setup could dramatically increase a carrier’s overall 4G capacity and ensure users get fast connection speeds even when they wander into the “dead zones” of the network. As for Qualcomm, the company invests in a lot of networking startups, but M87’s work bears some resemblance to peer-to-peer wireless networking the silicon giant is developing in-house. Qualcomm is a big proponent of a new mobile standard called LTE Direct, which uses LTE radios to connect two nearby devices directly rather than use the mobile network as an intermediary. If Qualcomm were to combine its own LTE Direct efforts with M87’s crowdsourced connectivity technology, it could create extremely dense and constantly morphing LTE networks that penetrate into the furthest recesses of buildings and other hard-to-reach areas. Source: QualcommRelated research and analysis from Gigaom Research:Subscriber content. Sign up for a free trial.Gigaom Research predictions for 2014How to manage mobile expenses in a BYOD worldHow new devices, networks, and consumer habits will change the web experience
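M87 hasn’t published its routing logic, but the core idea the company describes, relaying a packet through whichever nearby phone offers the best path back to the network, can be sketched in a few lines. Everything below is a conceptual illustration with made-up numbers, not M87’s implementation:

```python
# Conceptual sketch only, not M87's actual algorithm: pick the nearby phone whose
# end-to-end path back to the network (local Wi-Fi hop + that phone's cellular link)
# beats our own connection, and relay traffic through it.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Peer:
    name: str
    wifi_mbps: float        # estimated throughput of the local Wi-Fi hop to this peer
    cellular_mbps: float    # the peer's current 3G/4G throughput to the network
    willing_to_relay: bool  # peer has opted in and has battery/data-plan headroom

def effective_rate(peer: Peer) -> float:
    # A relayed path is only as fast as its slower hop.
    return min(peer.wifi_mbps, peer.cellular_mbps)

def pick_relay(own_cellular_mbps: float, peers: List[Peer]) -> Optional[Peer]:
    """Return the peer worth relaying through, or None to use our own radio."""
    candidates = [p for p in peers if p.willing_to_relay]
    if not candidates:
        return None
    best = max(candidates, key=effective_rate)
    return best if effective_rate(best) > own_cellular_mbps else None

if __name__ == "__main__":
    peers = [
        Peer("phone-a", wifi_mbps=90.0, cellular_mbps=20.0, willing_to_relay=True),
        Peer("phone-b", wifi_mbps=15.0, cellular_mbps=40.0, willing_to_relay=True),
    ]
    relay = pick_relay(own_cellular_mbps=2.0, peers=peers)  # we're in a "dead zone"
    print(relay.name if relay else "use own connection")    # -> phone-a
```

A production version would also have to respect battery, data-plan and security constraints, which is precisely the part M87 says it has solved for carriers.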

Read More...
posted 1 day ago on gigaom
U.S. Sen. Al Franken has written to Netflix asking its opinion on Comcast’s efforts to buy Time Warner Cable, implying that Netflix is a good indicator of the potential consumer and content harms of the deal. In his letter, Franken touches on the peering challenge, noting that Comcast implied it was no big thing in its hearing before the Senate Judiciary committee.

Read More...
posted 1 day ago on gigaom
It all started out innocently enough: Think of a frugal party host trying to save on libations. As mass adoption of smart devices became a cultural phenomenon, people began using personal mobile technology in their work environments and “bring your own device” (BYOD) was born. The BYOD acronym is a legitimate byproduct of the consumerization of IT, which is a complicated issue, particularly for actual IT professionals. Now the BYO-canon is assimilating the cloud. Is BYOC the next big thing? Cloud networks offer us many modern conveniences: music, apps, CRM, storage and phone systems. These networks work remarkably well when managed by qualified personnel. The tricky part, then, is striking a balance between a well-managed and controlled network and employing services that are pleasant for end users. Companies that gather resources into a smaller number of cloud services have the greatest ability to govern them. The RingCentral cloud phone system, which includes access to Google Drive, Box, Dropbox, Salesforce.com and Microsoft applications, offers an example of how cloud services enable employees to work how they want. The administrator is the gatekeeper of the service, while employees still have the freedom to work the way they like. The good news is that ultimately the system and data are available at the discretion of the company, a practical solution as we move into the age of BYOE, or bring your own everything. Learn about the RingCentral cloud phone solution: a complete communication service with a rich feature set, integrations and a mobile app that’s easy to use and manage.

Read More...
posted 1 day ago on gigaom
It’s been a month since Newsweek “outed” bitcoin’s creator, Satoshi Nakamoto, as none other than Dorian Satoshi Nakamoto. That declaration was met with immediate skepticism from the bitcoin community and an outright denial from the model train-loving man they identified. Now, research from Aston University in the U.K. has identified a possible new creator of the original bitcoin paper: Nick Szabo, a well-known digital currency blogger and creator of bit gold, which was seen as a precursor to the bitcoin system. He also received a law degree from George Washington University in 2006, according to the Wall Street Journal (but reports that he was also a professor there are false). Szabo was an “uncanny” match to the original bitcoin whitepaper, said the team’s leader, Dr. Jack Grieve, in a statement. “Our study adds to the weight of evidence pointing towards Nick Szabo. The case looks pretty clear-cut. Szabo is an expert in law, finance, cryptography and computer science. He created ‘bit gold,’ a precursor to Bitcoin, and was looking for collaborators in 2008. Did Nick Szabo create Bitcoin? We’re not sure, but we think he probably wrote the paper so it’s certainly worth a closer look,” said Grieve in the release. The team from the university’s Centre for Forensic Linguistics looked at the writing of 11 candidates, all formerly rumored Satoshi Nakamotos. In addition to Szabo and Newsweek’s Dorian Nakamoto, the researchers also analyzed Hal Finney, Gavin Andresen, Jed McCaleb, Vili Lehdonvirta, Dustin Trammell, Michael Clear, Shinichi Mochizuki, Wei Dai and the team of Neal King, Vladimir Oksman and Charles Bry. The study showed that Szabo was “by far” the closest match out of the 11 compared after the team matched linguistic traits from the paper with Szabo’s blog posts: This includes the use of the phrases “chain of…”, “trusted third parties”, “for our purposes”, “need for…”, “still”, “of course”, “as long as”, “such as” and “only” numerous times; contractions; commas before ‘and’ and ‘but’; hyphenation; ‘-ly’ adverbs; the pronouns ‘we’ and ‘our’ in papers by a single author; fragmented sentences following colons; and reflexive (-self) pronouns. It’s not the first time that a linguistic analysis has matched Szabo to the original whitepaper. Researcher Skye Gray ran an analysis in December 2013 that also identified Szabo as the possible creator of bitcoin. Gray also noticed the repeated use of “Of course,” “for our purposes” and “trusted third parties.” Gray’s analysis does delve a little further and mentions Nakamoto’s use of British spellings like “favour” instead of “favor,” although those can be easily swapped out by someone trying to be anonymous. It’s important to note that the study only identifies Szabo as the possible author of the paper itself — at least out of that group of eleven — and that he’s been at the top of people’s lists for a long time. In a 2011 WIRED article, Szabo denied being the founder and instead pointed to Hal Finney or Wei Dai (who then both denied it in the same piece). A Forbes reporter found Hal Finney, the first person to receive a bitcoin from Nakamoto, a few weeks ago living in the same neighborhood as Dorian S. Nakamoto, but that was apparently just a weird coincidence. The Aston University research might solidify Szabo as the frontrunner for author of the paper, but that doesn’t mean he’s the creator of the bitcoin system. It’s long been a theory in the bitcoin community that Satoshi Nakamoto might be a group of people. 
If that’s the case, Szabo may not be bitcoin’s creator, but one of them.
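For the curious, the kind of comparison the Aston team describes, counting how often marker phrases and stylistic habits appear in each candidate’s writing, can be roughed out in a few lines of Python. This is not the researchers’ methodology or data; the file paths are placeholders and the distance measure is deliberately naive:

```python
# Rough illustration of the kind of marker-phrase comparison described above.
# NOT the Aston team's methodology or data: file paths are placeholders and the
# distance measure is deliberately naive.
import re

MARKERS = ["chain of", "trusted third parties", "for our purposes", "need for",
           "as long as", "of course", "such as", "still", "only"]

def marker_profile(text: str) -> dict:
    """Occurrences of each marker phrase per 1,000 words of the text."""
    words = re.findall(r"[a-z']+", text.lower())
    joined = " ".join(words)
    total = max(len(words), 1)
    return {m: 1000.0 * len(re.findall(r"\b" + re.escape(m) + r"\b", joined)) / total
            for m in MARKERS}

def distance(a: dict, b: dict) -> float:
    """Euclidean distance between two profiles; lower means stylistically closer."""
    return sum((a[m] - b[m]) ** 2 for m in MARKERS) ** 0.5

# Placeholder corpora: the whitepaper plus writing samples from each candidate.
whitepaper = open("bitcoin_whitepaper.txt").read()
candidates = {
    "szabo": open("szabo_blog_posts.txt").read(),
    "finney": open("finney_mailing_list.txt").read(),
}

paper = marker_profile(whitepaper)
ranked = sorted(candidates, key=lambda name: distance(paper, marker_profile(candidates[name])))
print("closest stylistic match:", ranked[0])
```

As the researchers themselves stress, this sort of frequency evidence can make a candidate “worth a closer look,” but it cannot prove authorship.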

Read More...
posted 1 day ago on gigaom
On Wednesday, Google released its earnings statement for Q1 of 2014, reporting revenue of $15.4 billion, up 19 percent year on year, and an increase in net income. But the numbers came in just shy of analysts’ expectations, which projected the company to rake in $15.58 billion. Google’s earnings per share came in at $6.27 — about six cents shy of analysts’ projections of $6.33. According to Google, traffic acquisition costs increased to $3.23 billion in the first quarter of 2014, representing 23 percent of advertising revenues. Meanwhile, cost-per-click continues to go down, decreasing 9 percent from the first quarter of 2013 and remaining steady throughout the year, although the number of paid clicks on its advertising network jumped 26 percent compared to the same period last year. This earnings report also reflected Google’s purchase of Nest for $3.2 billion, as well as the sale of Motorola to Lenovo for $2.91 billion – both of which happened in January of this year. “Motorola had a great quarter in Q1, with the Moto G showing strong sales momentum, especially in emerging markets,” said Google CFO Patrick Pichette on the company’s earnings call. “The team continues to be hard at work, and we look forward to seeing them join up with Lenovo soon.” This article was updated to include content from Google’s earnings call.Related research and analysis from Gigaom Research:Subscriber content. Sign up for a free trial.Listening platforms: finding the value in social media dataManaging infinite choice: the new era of TV user interfacesConnected Consumer Q3: Netflix fumbles; Kindle Fire shines

Read More...
posted 1 day ago on gigaom
Joining the many already available ways to access a computer from a mobile device, Google released Chrome Remote Desktop for Android on Wednesday. With the free app, found in the Google Play Store, you can remotely connect to and control a Microsoft Windows PC or Mac OS X computer from your Android phone or tablet. So why, then, is the app called a “Chrome” remote desktop tool? Google is using Chrome as the framework for the connection; you’ll need the Chrome browser installed on the remote computer in order to connect to it through your Android device. Google takes the same approach with the Chrome extension that lets a Chromebook remotely control a traditional computer. And that Chrome framework is part of a larger strategy for Google to boost engagement on non-Google platforms. While the app works on any Android phone or tablet running Android 4.0 or better, it’s much better suited to a larger-screened device. Viewing a full desktop computer interface on a small handset display is less than ideal, although it could be handy in a pinch.Related research and analysis from Gigaom Research:Subscriber content. Sign up for a free trial.A look back at the third quarter of 2013A demographic and business model analysis of today’s app developerIs Android broken and if so, will Google fix it?

Read More...
posted 1 day ago on gigaom
If you mainly use your Kindle to read ebooks, you may be unaware that the device is also an excellent document reader — simply send an email with a document to a specific Amazon email address and it will appear on your e-reader. On Wednesday, Amazon sent an email to Kindle users informing them that all documents sent to Kindle are now stored on Amazon Cloud Drive, in a folder labeled “My Send to Kindle Docs,” even documents sent before the cloud drive integration. Previously, documents sent to Kindle were converted to .mobi format, but now those docs are stored in their original format. It’s pretty nifty and allows you to send a document to your reader and make a cloud backup accessible from the browser at the same time.
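For anyone who wants to script that flow, it really is just an email with an attachment. A minimal sketch, assuming you have an SMTP server to send through; the addresses and credentials are placeholders, and the sending address has to be on your Kindle account’s approved list:

```python
# Minimal sketch of the email-based Send to Kindle flow described above.
# All addresses and credentials are placeholders: the destination is your own
# @kindle.com address (shown in your Amazon account settings) and the sender
# must be on your approved sender list.
import smtplib
from email.message import EmailMessage
from pathlib import Path

doc = Path("notes.pdf")                        # document to push to the e-reader

msg = EmailMessage()
msg["From"] = "me@example.com"                 # must be an approved sender
msg["To"] = "your-name_123@kindle.com"         # your personal Send to Kindle address
msg["Subject"] = doc.name
msg.add_attachment(doc.read_bytes(),
                   maintype="application",
                   subtype="pdf",
                   filename=doc.name)

with smtplib.SMTP("smtp.example.com", 587) as smtp:  # placeholder SMTP relay
    smtp.starttls()
    smtp.login("me@example.com", "app-password")     # placeholder credentials
    smtp.send_message(msg)
```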

Read More...
posted 1 day ago on gigaom
We’ve added a new member to the team: Kif Leswing is our new mobile writer based in New York. Kif jumped into the mix right away on Monday, noting how Google’s distribution strategy so far with Google Glass mimics that of high-end fashion brands (even if it’s a pretty ugly product) and taking a look at Microsoft’s new Office subscription plan. Mobile computing is the workhorse of this generation, and we’re lucky to have a strong team here at Gigaom covering technology that has gone from the fringe to the mainstream in just seven years. Kif will focus on writing about the smartphones, tablets, and apps that enable the mobile world, joining Kevin Tofel, who is also looking at the development of wearable computing, and Kevin Fitchard, who tracks the wireless networks that make all of these devices compelling. Kif comes to us from Gizmodo and Wired, and is probably the only Gigaom writer who has ever booked KRS-One for a gig. He’s a graduate of Oberlin College and former lacrosse player for “the perennially winless Yeomen,” as he put it. Please welcome Kif (pronounced like the proper way to pronounce GIF). You can follow him on Twitter @kifleswing and contact him here.

Read More...
posted 1 day ago on gigaom
When Google’s Street View cars drive up and down the streets of a town, they don’t just collect images. They also log addresses, which helps match Street View with Maps. It’s much faster to have a computer do the matching, so Google relies on artificial intelligence to pick out address numbers and decide what they mean. Street View addresses cracked by the algorithm. Photo courtesy of Google. Google said today that its address recognition will get a boost from another development–an algorithm that can crack Google’s version of the CAPTCHA, known as reCAPTCHA, with more than 99 percent accuracy. While humans don’t have too much trouble picking out characters that are jumbled, computers have a very hard time deciphering where one letter ends and the next begins. Google’s new algorithm comes much closer to a human level of recognition. As artificial intelligence and internet bots have improved, the longstanding and very annoying CAPTCHA system has come under strain. Vicarious, an artificial intelligence company that received $40 million in funding in March, built an algorithm last year that was able to crack any type of CAPTCHA with an accuracy of at least 90 percent. Its effectiveness came from its ability to pick out characters even when they were squished together or overlapping. But Google says not to worry. It used its findings with the algorithm to improve reCAPTCHAS not by making them more difficult, but by incorporating other factors that analyze whether it is a human or a bot on the other end. It has actually made reCAPTCHAS clearer as a result. And the algorithm is now busy crunching address numbers for the Street View team.Related research and analysis from Gigaom Research:Subscriber content. Sign up for a free trial.Why we must apply big data analytics to human-generated dataSponsored Research: How story-driven video is poised to take offA look back at the first quarter of 2014

Read More...
posted 1 day ago on gigaom
The Royal Canadian Mounted Police have arrested a 19-year-old man who allegedly used a notorious security flaw to attack the country’s tax agency and steal data from 900 Canadians, the force announced Wednesday. The RCMP said in a statement that they have arrested 19-year-old Stephen Arthuro Solis-Reyes and charged him with two hacking-related offenses over an incident that led the Canada Revenue Agency to shut its website for five days. “The RCMP treated this breach of security as a high priority case and mobilized the necessary resources to resolve the matter as quickly as possible,” said an official. The incident was one of the most high-profile security breaches related to Heartbleed, a two-year-old flaw in the software that many companies and governments use to encrypt their website data. Security professionals reported the flaw two weeks ago and provided a patch, but it remains unclear how many people knew about or exploited the bug. In the case of the Canadian tax agency, the man grabbed 900 Social Insurance Numbers, a form of tax ID that is akin to the Social Security Numbers used by Americans. The RCMP statement says the investigation is still ongoing and that Solis-Reyes is to appear in court in Ottawa in July.

Read More...
posted 1 day ago on gigaom
A few days back my friend Pip Coburn, who runs an investment advisory service, and his colleague Brynne Thompson asked me to discuss what I have learned about media after spending nearly 12 years on Gigaom, pretty much most of my working life in various aspects of media, and two decades on the internet. It turned out to be a fun conversation that was shared by Pip and Brynne with their carefully curated email list of friends and clients. After going over it, I thought, why not create an abbreviated version and share it online?
Media is not publishing alone
My definition of media? “Anything which owns attention.” This could be a game, or perhaps a platform. Ironically, the media tends to associate media with publishing — digital or otherwise — which in turn is too narrow a way to consider not only the media but also the reality of the competitive landscape and media-focused innovation.
Photo by Maksym Yemelynov/Thinkstock
Media continues to be under the influence of the deflationary forces of the internet. Whether it is through stock-market trading or the sale of hotel rooms, the internet has a way of bringing deflationary forces to all businesses that were hitherto inefficient and involved many middlemen. There are two major deflationary forces in digital media that are disrupting business models:
1. The “ruthless efficiency” of advertising on the internet: highly targeted demand.
2. The endless inventory available on the internet: overwhelming supply.
The “ruthless efficiency” includes the role of programmatic ad exchanges and the ability of brands to more accurately target an audience with newer and better tracking possibilities, including the increasing amount of social data we typically share with social web platforms such as Facebook, Instagram, Pinterest and Twitter. We are heading into a future where advertisers can buy traffic at much lower prices. Both forces are deflationary and will need a complete rethink of the business models of the more traditional media companies.
Traffic, writers & analytics
Some media companies that rely on advertising revenue are tying journalist compensation to the traffic their stories generate. It doesn’t work, because it de-prioritizes writing. Writing works when publications are writing for and serving the best interest of their users; numbers are a good yardstick but not a way to compensate a person. Tools like Chartbeat are like mile-markers, but they are not complete arbiters. The tendency to adapt behavior and business strategy to this data is becoming far more predominant within the industry, and that is a mistake. Tony Haile, CEO of Chartbeat, reminded me of this quote from Andrew Lang, a Scottish poet: “An unsophisticated forecaster uses statistics as a drunken man uses lampposts — for support rather than for illumination.” Building a business over time with content that is less ephemeral than stalking celebrities requires more skill: the ability of the writers to generate insight, and of the publisher’s business to support what generating insight takes.
Photo by Thinkstock/wx-bradwang
Fake traffic and bots rule
A few weeks ago, Haile wrote about the challenges facing internet publishing, outlining that nearly 55 percent of people are spending less than 15 seconds on a page. (They analyzed 2 billion pageviews generated by 580,000 articles on 2,000 sites, according to Haile.) I don’t think that is feasible. Other people in the business agree that a lot of the traffic on the web is bot traffic, so all this traffic people talk about is faux traffic. 
Is a page being auto-refreshed on an open tab in your browser really useful “attention?” I don’t think so. There are many more examples of this worthless traffic. No one talks about it. No one really wants to dig in to find out what’s real and what’s not. Plausible deniability is a wonderful thing for politics and advertising. There’s always been a level of ambiguity in the advertising business and nothing really has changed.
What could be the next successful model?
Everyone is trying to figure out what the next model is, but it’s not here yet. There are glimpses of the future. For instance, Foursquare can provide the underpinning of the new version or future iteration of what Bon Appetit or Gourmet currently provide. Instagram and its 200 million monthly active users are participating in a new kind of transmission (like television). Twitter should be at the forefront of this, but there is a lack of clarity on the part of the company. I have some ideas and am trying to flesh them out. In searching for the next sustainable business model or media company, the company needs to be great at “owning attention” and must be very clear about what it stands for. What are you doing and for whom? Most publishing companies in particular cannot say what they are and what purpose they serve. When I started Gigaom (the company), I wanted to turn my blog into a service that helped make complex ideas simple. And that philosophy is reflected in our events and our decision to have a subscription-based research business, which in turn has led us to a business model that is less influenced by pure traffic figures.

Read More...
posted 1 day ago on gigaom
Time Warner Cable has turned on the Hotspot 2.0 capabilities across its public Wi-Fi network, letting customers with newer smartphones or tablets connect to its 33,000-node wireless network without entering passwords or dealing with login screens. Time Warner VP of Wireless Products Rob Cerbone confirmed to Gigaom that it has upgraded the majority of its network with Hotspot 2.0 software, and its broadband customers have been connecting to it since the end of March. Hotspot 2.0 is a technology designed to make public Wi-Fi work like cellular networks by automatically recognizing and connecting devices that have permission to access any given access point (a rough sketch of that matching logic appears at the end of this post). Typically, consumers trying to use an ISP or carrier’s Wi-Fi network have to go through a login portal in their web browsers or download special connection software, limiting the hotspots’ appeal to consumers, especially those connecting with mobile devices. A Ruckus Wireless Wi-Fi access point similar to those used in TWC’s network (source: Ruckus) Hotspot 2.0 has actually been around for quite a while — the Wi-Fi Alliance began certifying devices two years ago under its Passpoint program — but carriers and ISPs have been slow to adopt it. Hotspot provider Boingo began offering it to its customers in February, but on a limited basis in 21 airports, making Time Warner’s launch the first large-scale implementation of Hotspot 2.0 in the U.S. Time Warner is looking at Hotspot 2.0 differently than a carrier would, Cerbone said. While mobile operators are looking to offload data traffic from their cellular networks, Time Warner doesn’t have a mobile network. Wi-Fi is more a means to give its cable customers access to broadband connections outside their homes, which is why it has focused its hotspot efforts in key markets in its cable territory. Today its Wi-Fi systems are concentrated in commercial businesses and heavily trafficked outdoor locations in Southern California, New York City, Austin, Charlotte, Kansas City, Myrtle Beach and Hawaii. Time Warner’s Los Angeles Wi-Fi network Many of Time Warner’s customers weren’t accessing the network for two reasons, Cerbone said. To use it, customers had to specifically search out and log in to hotspots, and because of the nature of portal-based authentication, those connections were inherently insecure. Hotspot 2.0 largely eliminates both obstacles. After logging in for the first time with a Passpoint-certified device (Here’s a full list), devices will automatically discover and connect to TWC’s entire hotspot network. And those links will be encrypted using WPA2 security. Basically, Time Warner customers will get the same kind of experience they get on their home Wi-Fi network. “This was a good opportunity to get a secure connection to our customers,” Cerbone said. The network has only been live for a few weeks, but Time Warner has already seen a 15 percent boost in users who had never accessed the network before suddenly connecting to Wi-Fi, Cerbone said.
The Big Picture
Time Warner could do a lot with a Wi-Fi network that behaves like a cellular network. For instance, Time Warner could use it to create a “Wi-Fi-first” mobile carrier like French ISP Iliad has done with its Free Mobile service. TWC could use its hotspots and residential Wi-Fi to bear most of the data and voice traffic load and fill in the gaps with cellular connectivity through a wholesale agreement with a traditional mobile carrier. 
In its merger filing with the FCC, Comcast said it is investigating the possibility of just such a mobile service with Time Warner, citing it as a reason for regulators to let the deal go through. According to Cerbone, TWC has no plans today to launch a competing mobile carrier using Wi-Fi. In fact, he said that Time Warner is perfectly content with its cross-selling partnership with Verizon Wireless. But Cerbone pointed out that the new Hotspot 2.0 capabilities would be highly useful for customers who buy their mobile service from an independent Wi-Fi-first carrier such as Scratch Wireless or Republic Wireless. Both virtual operators keep their prices low — in some cases, free — by leaning heavily on Wi-Fi. TWC’s network gives them a lot of Wi-Fi to play with, Cerbone said. The CableWiFi network is focused on big cities, providing lots of metro capacity but leaving huge coverage gaps (Source: CableWiFi) Time Warner is, however, looking into ways to expand the scope and scale of its Wi-Fi offering, Cerbone said. As other wireless ISPs upgrade their networks to support Hotspot 2.0 — and eventually Next Generation Hotspot (NGH) technology — brokering useful roaming agreements will become far easier, allowing TWC to expand its Wi-Fi footprint beyond its core cable territories. Time Warner is also part of the CableWiFi consortium of five major cable operators that pools hotspots. Comcast confirmed to FierceWireless last week that Hotspot 2.0 is on its roadmap. Once it and other cable providers upgrade their networks, Time Warner’s 33,000-node Hotspot 2.0 network could turn into a 200,000-node network, encompassing most major cities in the U.S. Related research and analysis from Gigaom Research:Subscriber content. Sign up for a free trial.Gigaom Research predictions for 2014How new devices, networks, and consumer habits will change the web experienceWhat to watch in mobile in 2013
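As for the sketch promised above: the automatic connection in Hotspot 2.0 boils down to the device querying an access point over ANQP before associating and matching the advertised identifiers, such as NAI realms and roaming-consortium IDs, against credentials its operator has provisioned. The toy version below is purely illustrative; the field names and values are assumptions, not TWC’s or any vendor’s implementation:

```python
# Loose, hypothetical sketch of Passpoint (Hotspot 2.0) network selection: before
# associating, the device queries the access point over ANQP (802.11u) and matches
# the advertised identifiers against credentials provisioned by its operator.
# Field names and values are illustrative, not TWC's or any vendor's implementation.
from dataclasses import dataclass, field
from typing import Set

@dataclass
class AnqpInfo:                       # what the hotspot advertises via ANQP
    nai_realms: Set[str]              # realms the AP can authenticate against
    roaming_consortium_ois: Set[str]  # identifiers of roaming partners

@dataclass
class Credential:                     # what the operator provisions on the device
    realm: str
    roaming_ois: Set[str] = field(default_factory=set)

def can_auto_connect(cred: Credential, anqp: AnqpInfo) -> bool:
    """True if a provisioned credential matches something the AP advertises."""
    return cred.realm in anqp.nai_realms or bool(cred.roaming_ois & anqp.roaming_consortium_ois)

# Example: a cable-broadband credential checked against a hypothetical hotspot.
cred = Credential(realm="wifi.example-cable.net", roaming_ois={"001122"})
hotspot = AnqpInfo(nai_realms={"wifi.example-cable.net"}, roaming_consortium_ois={"aabbcc"})
if can_auto_connect(cred, hotspot):
    print("associate and authenticate with WPA2-Enterprise (802.1X/EAP)")
```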

Read More...
posted 1 day ago on gigaom
Selfie app Frontback — which allows users to take advantage of a phone’s front- and rear-facing cameras to stitch together two-shot photos to share with friends — released its app for Android on Wednesday. The iPhone app has been around for a year and just surpassed its millionth download in March. The company says that the Android app includes identical features to the iOS version, but also has a new feature called “Offline Mode,” which allows users to take pictures without an internet connection.Related research and analysis from Gigaom Research:Subscriber content. Sign up for a free trial.Why mobile must be part of the shopping experienceWhat We Can Learn From comScore’s Year in ReviewHow to utilize cloud computing, big data, and crowdsourcing for an agile enterprise

Read More...
posted 1 day ago on gigaom
It seems like all of the newest camera apps are coming with a way to refocus the image after the fact, and now Google is getting in on the game as well. The company introduced Google Camera for Android on Wednesday, a new app in the Google Play Store that includes a Lens Blur feature. Technically, Lens Blur is a way to add depth of field, or bokeh, to an image, but the end result looks very similar to the picture-refocusing features found in other camera apps, as Google explains in the blog post announcement: “Unlike a regular photo, Lens Blur lets you change the point or level of focus after the photo is taken. You can choose to make any object come into focus simply by tapping on it in the image. By changing the depth-of-field slider, you can simulate different aperture sizes, to achieve bokeh effects ranging from subtle to surreal (e.g., tilt-shift). The new image is rendered instantly, allowing you to see your changes in real time.” The new Google Camera does this by creating a depth map of each image using an algorithm. By using the mapping data and simulating a thin lens, the software lets the user adjust focus and blur in real time using a slider. The new camera app also includes existing modes for Panorama and 360-degree Photo Sphere images. Surprisingly, Google Camera doesn’t take the place of the native Android Camera app; at least it didn’t on my Moto X when I installed it. And taking a picture in Lens Blur mode is a multi-step process. First you take a normal picture, keeping your subject centered. Then Google Camera instructs you to raise your phone or tablet while still keeping your subject in the center, likely to capture additional depth information. After that, when looking at the result, you’ll need to wait a few seconds for image processing; this took about 10 seconds on my phone. Finally, you can drag a slider to add or remove blur in your photo, even choosing your own focal point. The new app is available for Android phones or tablets running Android 4.4 or better.Related research and analysis from Gigaom Research:Subscriber content. Sign up for a free trial.How to manage mobile security through productivityHow the consumer space battled licensing issues in the fourth-quarter 2013Sponsored Research: How empowering workers enhances business communication
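The article above gives the gist of how Lens Blur renders the effect: build a per-pixel depth map, then blur each pixel more the farther its depth sits from the chosen focal plane. The sketch below illustrates that rendering step only; it is not Google’s algorithm, and it assumes an aligned depth map already exists:

```python
# Illustrative sketch of depth-driven refocusing in the spirit of Lens Blur:
# blur each pixel more the farther its depth is from the chosen focal plane.
# Not Google's implementation; assumes an image and an aligned per-pixel depth map.
import cv2
import numpy as np

def lens_blur(image: np.ndarray, depth: np.ndarray,
              focal_depth: float, aperture: float = 8.0, levels: int = 6) -> np.ndarray:
    """image: HxWx3 uint8; depth: HxW float in [0, 1]; focal_depth in [0, 1]."""
    # Blur radius grows with distance from the focal plane, scaled by "aperture".
    radius = np.clip(np.abs(depth - focal_depth) * aperture, 0, aperture)
    out = image.astype(np.float32)
    step = aperture / levels
    for i in range(1, levels + 1):
        k = 2 * i + 1                                    # odd Gaussian kernel size
        blurred = cv2.GaussianBlur(image, (k, k), 0).astype(np.float32)
        mask = (radius >= i * step)[..., None]           # pixels needing this much blur
        out = np.where(mask, blurred, out)
    return out.astype(np.uint8)

# Usage: pick a focal plane (e.g. by tapping a pixel and reading its depth), then
# re-render as the slider changes the simulated aperture.
# result = lens_blur(img, depth_map, focal_depth=depth_map[y, x], aperture=12)
```

In the app, the depth map itself comes from the extra frames captured as you raise the phone, which is the part that accounts for those several seconds of processing.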

Read More...
posted 1 day ago on gigaom
Google will release a modular smart device in January 2015, according to the leader of Project Ara, Paul Eremenko. This is the first time Google has given an estimate of when its ambitious smartphone hardware platform will go on sale. Details about Google’s modular phone are coming out of the Project Ara developers conference taking place at the Computer History Museum in Mountain View on Wednesday. The conference is technical and there’s a lot to process for developers. But we can glean a few more facts about Google’s customizable smartphone. The core of Project Ara is a barebones piece of hardware called an Endo. An Endo doesn’t need to be a fully functioning phone; the first planned Endo, called a “gray phone,” will only include Wi-Fi, a processor, screen and battery, and will have an estimated production cost of around $50. The release of the gray phone is tentatively planned for January 2015, which is also when Google plans to start selling the modules that expand hardware functionality. There is a high-end model with a production cost of $500 planned, as well as multiple sizes, eventually. We’ve also got a good idea of how the modules will be standardized: in 20mm “blocks.” A prototype revealed on stage had room for two different sizes of module: two blocks by one block, and two blocks by two blocks. Project Ara is developed by Google’s Advanced Technology and Projects group, which was the piece of Motorola that Google kept after its sale to Lenovo. While it may be an experimental project, the team hopes to have a viable product in under two years using a project management framework borrowed from DARPA. Dieter Bohn over at the Verge has a stellar examination of the self-imposed constraints the three-person team is working under. But no amount of military-inspired hacking could hide that Project Ara still has a lot left to do. Modules attached to the prototype on stage were connected through clips, instead of the fascinating electromagnetic system previously announced. The prototype on stage yesterday did not even boot fully. Android does not yet support Ara-style hardware.  Related research and analysis from Gigaom Research:Subscriber content. Sign up for a free trial.A look back at the first quarter of 2014How to utilize cloud computing, big data, and crowdsourcing for an agile enterpriseBitcoin: why digital currency is the future financial system

Read More...
posted 1 day ago on gigaom
Teens and younger adults are a special kind of currency for social media sites: a high concentration of the under-24 demographic is often seen as a sign of longevity or potential (think Snapchat or Tumblr), while a decreased number can lead some to believe that the platform is struggling to keep up (see: Facebook). San Francisco-based photo-sharing platform We Heart It — which operates like a mix of Instagram and Pinterest and emphasizes the communication of feelings — is hitting it big with teen girls. Four out of five of the site’s more than 25 million users are under 24, and more than 70 percent are female. The demographic makeup of We Heart It is probably the most fascinating thing about the service, created in 2007 in Brazil but only incorporated in the U.S. in 2011. It focuses on strong visual imagery and emotions, but strips out comments — and the potential for negative interactions. That high concentration of young women is apparent on the platform, where pictures of quotes and landscapes are broken up with shirtless pictures of Zac Efron or swirly notebook drawings. We Heart It is now trying to turn its success with teenagers into a full-blown revenue stream with Collections, a service it released from beta Wednesday that allows users to group photos together into a package and share them. For example, the company held a Collections contest entitled “Spring Escapes” in March that encouraged users to put together a collection on what vacation means to them. Substitute that sentiment for a collection of photos of how its user base consumes Nutella or wears the latest fashions from Abercrombie and Fitch, and the company has fashioned a hyper-specific way for brands to connect with their target demographic, and We Heart It gets revenue from the campaigns. But success with teen girls doesn’t mean success with a broader audience, and when asked, We Heart It CEO Ranah Edelin was vague about how he plans to bring his social platform out of that coveted niche and hit a broader mainstream audience. Edelin, who was on the ground floor at music service Rhapsody, says that the company is focusing on opening up those revenue streams as a way to get to a broader audience. The platform remains a fascinating case study in how to get into the minds of teen girls, and if it manages to turn that special currency into actual currency, then it can sustain the social platform for its young audience. Featured image from Kateryna Yakovlieva/ShutterstockRelated research and analysis from Gigaom Research:Subscriber content. Sign up for a free trial.Why mobile must be part of the shopping experienceWhat We Can Learn From comScore’s Year in ReviewHow to utilize cloud computing, big data, and crowdsourcing for an agile enterprise

Read More...
posted 1 day ago on gigaom
Twitter’s finished being a rebel, at least when it comes to standing up for a James Dean fan who is being sued by a celebrity licensing company that wants to claim the fan’s @jamesdean account. Despite Twitter’s earlier claims that the account, which consisted of quotes and photos of the late Hollywood bad boy, did not violate its trademark policy, the company quietly suspended the account sometime in the last few weeks. The dispute came to light in February with reports that Indiana-based CMG Worldwide was suing Twitter to learn the identity of @JamesDean, who had been tweeting tributes to the actor since 2009. CMG Worldwide filed the lawsuit late last year, claiming that the @jamesdean account infringed on federal trademark laws and Indiana rights of publicity. “We looked at it as a positive sign that as the litigation moves forward, Twitter has suspended the site. No, there isn’t any judgement yet,” Mark Roesler, CEO of CMG, stated via email. Twitter, which has a reputation for defending its users in court, did not respond to repeated requests for comment, meaning it’s unclear if it has agreed to tell CMG who ran the @jamesdean account. Should the dead have publicity rights? The case is important because the outcome could limit how people use historical and fictional characters as part of their social media accounts. It also raises policy questions about the wisdom of extending rights of publicity — which are separate from copyright — to dead people. In contrast to states like New York, which doesn’t recognize a posthumous right to publicity, CMG’s home state of Indiana awards 100 years of protection. It’s unclear how such laws, which typically are used to protect physical products like masks and other merchandise, apply to Twitter and the online realm — and to what degree CMG can enforce Indiana’s law beyond the borders of that state. Some lawyers are skeptical about the efforts of CMG, which also asserts rights to figures like Jackie Robinson and Bettie Page, and whose website says “then, now and forever” to describe its intellectual property services: “With posthumous rights, what’s really bizarre is that publicity rights grew out of privacy rights – this notion that someone has a privacy right after you’re dead is odd,” according to intellectual property attorney Jonathan Band. Others are concerned about the potential harm to free speech of expanding these laws. “The real implication is for artistic expression,” said Ken Paulson of the First Amendment Center at Vanderbilt University, noting that Andy Warhol built his career on celebrity images. Paulson is also skeptical of awarding property rights where none existed before, and where there may be no moral or economic justification for doing so. “The broader question is how does society benefit from ensuring that James Dean’s great-great-grandson earns money from his likeness? Why build a system that would allow that to happen?” he said, noting that the heirs of figures like Daniel Boone or Davy Crockett don’t appear to be short-changed by their likeness being public. Roesler of CMG justified the expanded rights on the grounds that dead celebrities can be akin to commercial brands that are entitled to long-term protection. “With certain personalities, you can develop a brand – Walt Disney, James Dean – that go well beyond their lives,” he said, adding that, in the case of Dean, “We don’t want every use, just the official Twitter handle.”Related research and analysis from Gigaom Research:Subscriber content. 

Read More...
posted 1 day ago on gigaom
Netflix will open up shop in Germany in September, according to a report from the German site Curved that quotes “multiple people with knowledge of the process.” The company has been working on an advertising campaign to run in major German cities to introduce its service to prospective customers, according to Curved. Netflix announced last year that it plans a major expansion into continental Europe in the second half of 2014, but the company hasn’t yet said which countries it is targeting or exactly when it wants to launch. A Netflix spokesperson declined to comment on this latest report, but the information unearthed by Curved matches chatter I have been hearing about Netflix buying advertising in anticipation of a German launch. And in January, Netflix was looking to fill spots on its European PR team, with applicants being told that “Dutch, the Nordic languages, German and French are a plus.” Netflix launched in the Nordic countries in 2012, and expanded to the Netherlands in 2013.

Read More...
posted 2 days ago on gigaom
When you’re as big as Facebook is — with over a billion users worldwide and a stock-market value of more than $150 billion — it would be tempting to just sit back and watch the money roll in. But co-founder and CEO Mark Zuckerberg is doing the exact opposite: he is busy thinking of ways to disrupt his own success, as a way of figuring out how Facebook can adapt to a mobile world full of fragmented social experiences like Instagram and Snapchat. Zuckerberg talked to New York Times technology writer Farhad Manjoo about that and some other topics (including turning 30, a question he mostly ignored) during a recent interview. The piece is headlined “Can Facebook Innovate?” — which seems a little odd, given that Facebook has launched at least half a dozen new apps and services in the past year or two. As I’ve argued before, Facebook is one of the few large companies that seems to have taken Steve Jobs’ approach to heart: namely, the need to disrupt yourself before others do so (as Apple did with the iPhone and iPad). It’s true that most of Facebook’s experiments have failed to set the world on fire, but that doesn’t mean they aren’t innovative. Innovation also means trying and failing.

The Great Unbundling of Facebook

One of the big themes that comes out of the interview is that Facebook is taking a completely different approach to mobile than it took on the desktop. You could call it “The Great Unbundling” — the process of taking discrete parts of the monolithic social network and breaking them down into individual apps, such as Instagram. As Zuckerberg put it: “On desktop, where we grew up, the mode that made the most sense was to have a website, and to have different ways of sharing built as features within a website. So when we ported to mobile, that’s where we started — this one big blue app that approximated the desktop presence. But I think on mobile, people want different things… In mobile there’s a big premium on creating single-purpose first-class experiences. So what we’re doing with Creative Labs is basically unbundling the big blue app.” Creative Labs is the group responsible for Paper, the Facebook news-reading app that barely even looks like it comes from Facebook at all, and one that — at least in my anecdotal surveys of friends and social connections — has overtaken use of the official Facebook app for some people. And Zuckerberg has hinted that more such apps will be coming, along with apps created by extracting bits of the official app, such as the Facebook Messenger app. One thing that becomes clear during the Manjoo interview is that Zuckerberg sees many of these experiments as just that: experiments that could take years to show any meaningful results, if they ever do. And while he didn’t put it in so many words, Facebook also has the luxury of having a massive business to support those experiments, which gives the company a lot of runway. “The other thing that is important context to keep in mind is that, to some extent, most of these new things that we’re doing aren’t going to move any needles in our business for a very long time. The main Facebook usage is so big. About 20 percent of the time people spend on their phone is on Facebook. From that perspective, Messenger or Paper can do extremely well but they won’t move any needles.”

Different levels of experimentation

Zuckerberg also outlines how he thinks of Facebook’s business in a structural sense, as a series of pieces in a kind of pyramid.
First there’s the main app: “A billion people or more are using it, and it is a business.” Then there are things like Instagram, WhatsApp, Messenger and Search: “They will probably be the next things that will become businesses at Facebook. But you want to fast-forward three years before that will actually be a meaningful thing.” Then there are things like Home and Paper that are coming from the team at Creative Labs, Zuckerberg says — bets that are a lot longer-term: “Maybe in three to five years those will be in the stage where Instagram and Messenger are now. So what we want to do is build a pipeline of experiences for people to have. It would be a mistake to compare any of them in different life cycles to other ones. They’re in different levels.” As he has mentioned in other interviews (something I wrote about here), Zuckerberg also seems much more open to the idea that these smaller mobile experiences could involve various forms of anonymity or pseudonymity — something Facebook has always been opposed to in the past: “One of the things that we’re trying to do with Creative Labs and all our experiences is explore things that aren’t all tied to Facebook identity. Some things will be, but not everything will have to be, because there are some sets of experiences that are just better with other identities. I think you should expect to see more of that, where apps are going to be tied to different audiences.”

Read More...
posted 2 days ago on gigaom
Watch out, Crackle, there’s a new kid in town: Sony’s ad-supported streaming service got some competition this week from Tubi TV, a new streaming app from the San Francisco-based connected TV startup adRise. Tubi TV is already available on Amazon’s new Fire TV, and plans to launch on Roku and Xbox 360 in the coming days. Tubi TV’s ambitious goal is to become the largest library of free movies and TV shows, adRise founder and CEO Farhad Massoudi said during an interview earlier this week. At launch, Tubi TV will have more than 3,000 titles licensed from partners like the U.K.’s ITV, Endemol, Hasbro and Cinedigm. In the next six months, the company plans to grow Tubi’s catalog to 20,000 titles. Netflix subscribers will recognize some of the titles, while others haven’t previously been available on other streaming services. adRise head of business development Thomas Ahn Hicks told me that Tubi isn’t in the business of licensing exclusive content, but that the company’s existing relationships with content providers — adRise has been building connected TV apps for Starz, Hasbro and others — have helped it get access to a wide library of content. So why would a studio or production company that has its own apps also want to distribute its content through Tubi TV’s app? Massoudi said that the connected TV space is getting increasingly crowded, with hundreds of apps competing for a viewer’s attention. Bundling all the free and ad-supported content in one app, while also promoting the content of each studio, could help to solve that issue, he argued, adding that Tubi wanted to become the “first stop after Netflix.” Hicks agreed, and said that Tubi could be another option for users who already have Netflix. “This is really a complement to what’s out there,” he said, referring to existing subscription offerings.

Read More...