posted 4 days ago on gigaom
The world of business is changing, as are the locations of the people driving that business. How companies reach new users, and how they treat them once they do, will be the defining business issue of the future. Those who deliver the best user experience to a global audience will win this race, and the race itself will change the internet as we know it.

Improvements to the customer experience come in many forms: a new mobile app, data-driven proactive outreach and troubleshooting, cross-platform messaging. Whatever tactic you choose to improve the user experience, the bar for expectations will be set higher and the stakes will be raised.

How will this force change the internet? An improved customer experience rapidly turns negative when it’s unavailable or slow. The internet was architected with reliability in mind; speed and performance were second-class citizens to availability (rightfully so). But the business impact of slow is very real. One Google study found that a half-second increase in page load time equated to a 20 percent drop in traffic. For customers, slow is the new downtime.

The longer the distance information must travel between a customer and the infrastructure enabling the experience, the slower the experience. This latency can have as much impact on the responsiveness and speed of a customer’s experience as the bandwidth of the connection the user accesses the internet with, if not more. The internet community addressed this with the deployment of content delivery networks (CDNs): dense pockets of infrastructure in common internet exchange locations, caching static content. This strategy has worked well for the last 10 years, piggybacking on the inherent reliability of the internet’s architecture. The next 10 years will see technologists pushing further out to the edges of the internet.

It is getting crowded

In search of competitive advantage, businesses will demand faster user experiences. There are, however, a number of macro trends driving slower experiences as the internet expands.

The first macro trend is found in the most densely populated parts of North America and Europe, where urbanization is straining internet infrastructure. To reach new users, many operators are deploying more backhaul networks. This will work for a while, but investing in new backhaul networks is largely a duct-tape solution.

The Middle East, Africa and Asia Pacific lead the charge in internet growth. The unique geographic, economic and public-infrastructure attributes of these regions will challenge network operators and CDNs to deliver the same performance enjoyed in North America and Europe. Many users in these regions will be mobile-first due to sheer economics, and their first encounters with the internet will be through wireless rather than wired broadband connections, which brings its own set of performance challenges.

To deliver exceptional, responsive and reliable user experiences to new mobile-first users and congested urban users alike, without breaking the bank, horizontal architectures will be required and infrastructure will be deployed closer to global users. Rather than settling for a single location in a major metropolitan area, savvy technologists are not only deploying their applications to each major city, but are leveraging multiple deployments per city to better deliver exceptional experiences at scale.
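A rough sense of the physics explains the push to the edge. Below is a back-of-the-envelope Python sketch of round-trip propagation delay; the distances and the 30-round-trip page load are illustrative assumptions of mine, not figures from Dyn.

    # Back-of-the-envelope: round-trip propagation delay over fiber.
    # Light in fiber travels at roughly two-thirds its speed in vacuum.
    FIBER_SPEED_KM_S = 300_000 * 2 / 3  # ~200,000 km/s

    def round_trip_ms(distance_km: float) -> float:
        """Best-case round trip for one request/response over fiber."""
        return 2 * distance_km / FIBER_SPEED_KM_S * 1000

    # A chatty page load might need ~30 sequential round trips
    # (DNS, TCP and TLS handshakes, then dependent assets).
    for label, km in [("same city", 50), ("cross-country", 4000), ("intercontinental", 12000)]:
        rtt = round_trip_ms(km)
        print(f"{label:16s} {km:6d} km  RTT ~{rtt:5.1f} ms  30 round trips ~{30 * rtt:6.0f} ms")

Even with unlimited bandwidth, an intercontinental user can pay seconds of pure propagation delay on a chatty page load, which is exactly the cost that moving infrastructure closer to users removes.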
Today cloud vendors are providing multi-region infrastructure deployment options at the click of a button, and cloud-agnostic management frameworks are enabling sane management of global multi-vendor solutions. The barriers to horizontal scale have never been lower, and more technologists are taking advantage than ever before.

The future

How will all of this impact the internet? We will see less growth in city-to-city “backhaul” traffic and investment, and more growth in diverse investment closer to users. Inter-location investment will be dwarfed by intra-location investment. We’ll spend less energy building wider pipes from city to city, and more energy building efficient pipes inside cities, then focusing on the control-plane intelligence to keep bytes local to users. Along the way, our architectures will become more resilient to disaster and more cost-efficient to scale, providing added benefits.

Cory von Wallenstein is the chief technologist at Dyn, where he leads the technical vision on internet performance for customers like Twitter, Netflix, Etsy and over 100 of the Alexa 1000. Follow him on Twitter @cvwdyn. Feature image from Thinkstock/Alexaldo.

posted 4 days ago on gigaom
When it comes to discussing large-scale Infrastructure as a Service, the discussion typically begins and ends with Amazon Web Services, but the $25 million just raised by Firehost shows that there is interest in other offerings, especially if they can offer differentiated services — in Firehost’s case, it claims advanced, baked-in security. The new Series E round was led by The Stephens Group, which has been involved in the last four rounds, and brings total funding in Dallas-based Firehost to $59 million. Customers include the Clinton Foundation, Duke University, Farmers Insurance and the ACLU.

Firehost CEO Jim Lewandowski

The infusion will be used to speed up product development and to boost sales and marketing “at a time when companies desperately need to raise the bar on data protection, compliance and risk management,” new Firehost CEO Jim Lewandowski said via email. Lewandowski last month succeeded Chris Drake, who is now CTO.

“Simply put, compliance with standards like PCI and HIPAA provides a false sense of protection, as we have seen with many recent security breaches of household-name companies. Other IaaS providers consider security an afterthought, or avoid it altogether. If you ask yourself what the cloud provider of the future should look like, it can’t be what you see today,” he noted.

In other IaaS news, developer favorite Linode has dropped $45 million on a massive upgrade to its core infrastructure, installing new SSD drives, doubling RAM, and updating to Intel Ivy Bridge E5-2680 v2 processors and 40Gb networking — all of which will be made available to customers as part of a free upgrade. According to the blog post announcing the changes:

“All new Linodes will be created exclusively on the new Linode Cloud, using the new plan specs and on the new hardware and network. Likewise, existing Linodes can upgrade free of charge via the ‘Pending Upgrades’ link on your Linode’s Dashboard, however there are some temporary availability delays while we work through getting hundreds more machines in the pipeline.”

This news, along with Amazon’s decision to segregate but keep running its oldie-but-goodie EC2 instances, raises an interesting point about what cloud providers do with aging infrastructure. Do they simply forklift customers to the newer gear — not easy without live migration capabilities — or do they just keep them churning away ad nauseam on gear that grows less energy-efficient and less secure as time goes by, and point all new workloads to the latest stuff? Stay tuned: this will be a topic of discussion at Gigaom’s Structure show in June.

And don’t forget the Structure Show, OpenStack edition

On this week’s cloud podcast, hear from Jonathan Bryce and Mark Collier about why the latest OpenStack “Icehouse” release is all about (well, maybe not ALL about) users.

posted 4 days ago on gigaom
Health care’s integration with information technology remains inconsistent today. Supercomputers helped fuel the genomics revolution, a critical success for the health care space. On the other hand, the transition to electronic medical records has been a promise never fully realized. The rapid adoption of wearables, though, leaves little doubt that electronics is poised to make a large impact on health and medicine.

As Gigaom Research analyst Jody Ranck writes in his forthcoming report on health care and the internet of things, sensors and other electronics will drive tremendous innovation in medical devices, building off the current momentum in fitness and wellness devices. Much of this development is centered on making devices ever smaller, from ingestible sensors in the form of pills to nanowires and lab-on-a-chip technologies.

This focus on miniaturization is no surprise. Silicon electronics has relentlessly followed Moore’s Law for the last 40 years, exponentially decreasing the size of a transistor. But in health care, smaller is not always better. Human beings are large, and many things we want to measure, like blood pressure or muscle movement, require larger-scale sensors.

This is where silicon — an element so critical to the development of computing that its name adorns Valleys, Alleys and other centers of IT innovation — starts to falter. Silicon is plentiful on Earth, but in the purified crystalline form required for semiconductors it is only cheap because many chips can be packed into a single wafer, enabling smaller devices. If larger sensors are carved out of a wafer, the economics aren’t nearly as favorable. Since, as mentioned above, smaller doesn’t necessarily mean better in health care, the key to fully realizing the promise of information technology may be to move electronics beyond silicon.

John Kymissis, a professor of electrical engineering at Columbia University, thinks he has the answer. Prof. Kymissis, who runs Columbia’s Laboratory for Unconventional Electronics (CLUE), has been researching thin film semiconductors. As the name suggests, thin film semiconductors are created by depositing a thin layer of electronics onto other materials. So instead of starting with a perfect wafer of silicon and carving out tiny transistors, you can pick a material that has certain desirable properties — like plastic or glass — and add the electronics on top. In fact, most of us benefit from thin film semiconductors every day in the displays of our smartphones, tablets and HDTVs. But beyond the display market there are promising applications for future development.

I met Prof. Kymissis at NYC Media Lab’s inaugural Geek of the Month gathering, where he discussed projects in his lab that are trying to advance the state of the art in thin film semiconductor systems. Many of these projects focus on health care applications because, as Prof. Kymissis puts it, “Engineering is about solving problems and health care folks have the biggest problems.”

One interesting solution CLUE is working on involves using a material called piezoelectric polymer as a substrate. Piezoelectric systems generate an electric charge in response to physical stress. This is the same principle behind how cigarette lighters ignite (or push-start buttons on gas grills, to pick a slightly healthier example).
The researchers at CLUE have added transistors to a thin flexible polymer film that responds to different strains, creating a microphone array that captures information about sound waves in a new way. One application of this is to measure the pressure wave inside the ear as part of a system that will aid in the placement of cochlear implants.

Can I bend your ear? (Image courtesy of John Kymissis, CLUE)

Another material CLUE is investigating is electrostrictive polymer. This works in the reverse way of piezoelectric polymer: with the application of an electric charge, the material contracts and bends like a muscle. Potential applications in prostheses and robotics are further off; one of the major hurdles is that it takes approximately 500 volts to make the polymer bend. The team at CLUE is using transistors on the polymer as switches that control the movement at a much more reasonable 30 volts or so. That would open the door to building the technology into prosthetic devices and other uses.

Thin film semiconductors still need work in important areas to realize their full potential. The technology can’t yet support wireless power transfer, so for all the magic in these devices they still require a wire, which limits many use cases. You also can’t yet put a radio on a thin film semiconductor, which leaves them outside the truly connected world of health care IoT. But there is plenty of research going on at CLUE and other labs around the country on these very topics. So while silicon isn’t going away, it’s likely that thin film will continue to open up new applications and increase the positive impact of electronics in health care.

Ken Andersen is vice president of operations for Gigaom Research. Thumbnail image courtesy of MauMyHaT/Thinkstock

posted 5 days ago on gigaom
Cloud providers Google, Amazon Web Services (AWS) and Microsoft are doing some spring-cleaning, and it’s out with the old, in with the new when it comes to pricing services. The latest cuts make it clear there’s a new business model driving cloud that is every bit as exponential in growth — with order-of-magnitude improvements to pricing — as Moore’s Law has been to computing.

If you need a refresher, Moore’s Law is “the observation that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years.” I propose my own version, Bezos’s law. Named for Amazon CEO Jeff Bezos, I define it as the observation that, over the history of cloud, a unit of computing power price is reduced by 50 percent approximately every three years.

I’ll show the math below, but if Bezos’s law reflects reality, the only conclusion is that most enterprises should dump their data centers and move to the public cloud, thus saving money. The savings compound: hardware prices fall under Moore’s Law, and moving out shed the fixed costs of maintenance, electrical power, cooling, buildings and the labor needed to run a data center. In the end, I’ll show how prices are reduced by about 20 percent per year, cutting your bill in half every three years.

How we got here

Google was first to announce “deep” cuts in on-demand instance pricing across the board. To make the point that cloud price cuts were long overdue, Google’s Urs Hölzle showed in March just how much cloud pricing hasn’t followed Moore’s Law: over the past five years, hardware costs decreased by 20 to 30 percent annually, but public cloud prices fell by just 8 percent annually.

Slide from Urs Hölzle’s keynote at Google Cloud Live, March 25, 2014

Having watched AWS announce, by my count, 43 price cuts during the past eight years, the claim of merely a 6 to 8 percent annual drop for public cloud seems off. (That would be a 2 percent reduction 43 times to get an 8 percent trend line.) Nevertheless, applying a Moore’s Law approach to capture the rate of change for cloud, one would hold the compute unit constant while expressing the gains as a lower price. Thus, Bezos’s law is the observation that, over the history of cloud, a unit of computing power price is reduced by X percent approximately every Y years.

A bit of digging on the Amazon Web Services blog provides the data to determine the computing power percentage (X) and time period (Y), starting from pricing published May 29, 2008. Comparing the 2008 data with Amazon EC2 spot instance pricing on April 1, 2014 shows that in six years, similar compute instance types declined in price by about 16 percent per year for medium instances and 20 percent per year for extra-large instances. Assuming a straight line, the pricing would have tracked as follows:

AWS cloud price reduction (assuming a 20 percent annual cut)

Year | Price | Price after 20% cut | Comment
2008 | $0.800 | $0.640 |
2009 | $0.640 | $0.512 |
2010 | $0.512 | $0.410 |
2011 | $0.410 | $0.328 | 3 years, 50% reduction
2012 | $0.328 | $0.262 |
2013 | $0.262 | $0.210 |
2014 | $0.210 | | 3 years, 50% reduction from 2011
April 1, 2014 | $0.210 | | 6 years, 75% reduction from 2008

For the AWS public cloud, X = 50 percent when Y = 3 years, supporting my claim: Bezos’s law is the observation that, over the history of cloud, a unit of computing power price is reduced by 50 percent approximately every three years.

What’s next

Clearly cloud, as opposed to building or maintaining a data center, is a much better economic delivery approach for most companies.
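To make the compounding in that table explicit, here is a minimal Python sketch of the same arithmetic, starting from the 2008 price of $0.80; it is an illustration of the article’s numbers, nothing more.

    # Bezos's law arithmetic: a 20 percent annual price cut roughly
    # halves the unit price of compute every three years.
    price = 0.80  # $/hour for an extra-large instance in 2008 (table above)
    for year in range(2008, 2015):
        print(f"{year}: ${price:.3f}")
        price *= 0.80  # 20 percent annual reduction

    # Sanity check: 0.8**3 = 0.512 (a ~49% cut every 3 years) and
    # 0.8**6 = 0.262 (a ~74% cut over 6 years), matching the table.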
And how can an enterprise data center possibly keep up with the hyper-competitive innovation from Amazon, IBM, Google and Microsoft? Enterprising tech pros know how this is going to play out. They’re way ahead in asking: “Why should we continue to saddle our company with a huge cost anchor called a data center or private cloud?”

It looks as though being a cloud provider isn’t going to be like a retail business when it comes to profits, but it may be too early to tell. It’s a bit like the x86 server business IBM recently sold to Lenovo. There will likely be innovation above the core cloud platform for a long time, though, which might alter the profitability outlook. Opinions aside, the math doesn’t lie. It’s not a question of if we’re moving to the cloud, but how — and when.

Greg O’Connor is CEO of AppZero, which specializes in migrating enterprise software applications to and from cloud computing services. Follow him on Twitter @gregoryjoconnor. Feature image illustration adapted from Steve Jurvetson/Wikimedia Commons

posted 5 days ago on gigaom
Believe it or not, an awful lot of people still buy CDs. When digital music purchases surpassed physical music sales in 2011, one would have thought that the world had forever changed and no one was buying CDs anymore. Then it was all about on-demand music services like Spotify, Rdio, Beats Music and Google Play’s All Access. When it was reported that digital music sales dropped in 2013, many thought this was the beginning of the end of digital music sales. Yet according to the International Federation of the Phonographic Industry’s annual Digital Music Report for 2013, physical-format music sales still account for 51 percent of worldwide revenue, whereas digital revenues, which include both purchases and on-demand subscriptions, account for only 39 percent.

So if you happen to be part of the group that is still buying physical CDs, the following guide will show you how to manage your music library and review your options for storing your music in the cloud.

Curating your music files

Digitizing CDs - If preserving the highest-quality music possible is important to you, consider using X Lossless Decoder for Mac, or XLD as it is often referred to. XLD supports both FLAC and Apple Lossless formats and can output a binary CD image with a cue sheet. The equivalent on Windows is Exact Audio Copy (EAC), which is also free. Both tools use AccurateRip to compare your digitized music files against the output that others have created, ensuring the most accurate and consistent rip possible.

Improving album information - Sometimes the album information associated with your music files is not what you want it to be. While iTunes on both Mac and Windows can edit your music information, I have found TuneUp ($49.95), which also has Mac and Windows versions, to be a great assistant when cleaning up my music library’s album information.

Finding album art - Cover Scout ($29.99, Mac) can help you find missing album art as well as upgrade your existing album art to higher-resolution images. On Windows you can use the free version of Album Art Downloader to increase the quality of your music files’ album art.

Renaming music files - Maintaining a sensible structure in your digitized library will make managing your files that much easier. PublicSpace’s A Better Finder Rename ($19.95), for both Mac and Windows, will let you set your own rules to rename your music files based on the embedded album information.

Finding duplicates - It happens to the best of us: when managing thousands of music files, you are bound to have a duplicate song here and there. MacPaw’s Gemini ($9.99) for Mac and Hardcoded Software’s dupeGuru Music Edition (free) for Windows will help locate and eliminate your duplicates.

Choosing which cloud to store your music

Once your CD collection is digitized, you will have to choose which cloud-based music library to use. The traditional cloud-based options include Apple’s iTunes Match, Google Play Music and Amazon Cloud Player. There are also personal cloud options, like Synology Disk Station Manager’s Audio Station, that allow you to access your digitized music library from anywhere on the internet. Which one is best? It depends on a few key differences.

Cost of cloud-based music storage

Google’s All Access service is the most expensive of the three subscription-based services at $9.99 per month. All Access is an on-demand service like Spotify, giving you instant access to millions of songs.
You do not have to use this service to store your music online, though: you can still upload 20,000 songs to your Google Play music library for free. Amazon Cloud Player starts you out with a relatively small amount of free storage, giving you space for only 250 songs. Upgrading to the Premium edition for $24.99 a year lets you upload 250,000 songs. Apple, on the other hand, will only store your iTunes purchases in iCloud for free; upgrading to iTunes Match for $24.99 a year allows you to upload 25,000 additional songs.

Synology’s Audio Station has no subscription pricing and no limit on the amount of music you can store. You do have to buy the network-attached storage device, which will cost anywhere from $150 for a budget-friendly one-bay device up to $600 for a business-class four-bay device. You will need to supply your own hard drives when purchasing the diskless version of each product.

Shopping for and buying music online

Selling only digital music, iTunes has pretty much remained the same since it first opened back in 2003. The experience has become a bit crowded over the years as Apple has added Movies, TV Shows, Audiobooks, Books, Podcasts, iTunes U and of course Apps to the same simple interface. Like Apple, Google’s music store sells digital music only. On iOS, the web-based storefront you have to go through to buy music from Google is hardly worth the effort.

Amazon sells both physical CDs and digital music. Just like purchases of digital music from Amazon’s MP3 store, any physical CD purchase that comes with Amazon’s AutoRip feature does not count toward your storage limit. AutoRip is available on certain physical CD purchases through Amazon and will automatically add the MP3 version of the album to your music library before the physical CD even ships.

Adding music to your library in the cloud

As you add music to your iTunes music library, iTunes Match will scan your music files and attempt to match them to music that Apple already has stored in its catalog. Once matched, Apple will upgrade your music to a higher-quality 256kbps version of the song. Uploading music to your Google Play library is accomplished by downloading and installing Google’s Music Manager app; this app will only allow you to upload and download your music files. Similar to Google, Amazon has its own Amazon Music Importer that you use to upload your music files. With Synology, you can mount the device as a drive on Windows or Mac and simply copy your music files over to the music folder on the device. Initial setup is a bit involved, but once you have everything set up, your transfer speeds on your local network will be much faster than your upload speeds over the internet.

Mobile music playback experience

Apple of course ships its own iTunes music app with every iOS device, and it can play music from your iTunes Match library. Google has its Google Play Music app (free, iPhone), Amazon has its Amazon Cloud Player (free, universal) and Synology has its DS audio app (free, universal). For the most part, all four music players have very similar capabilities. They can each sort your music by artist, album, song or genre, they all support playlist playback, and they all can search your cloud library for music to stream or download. Google is the only one that does not have a native iOS app for the iPad; gMusic 2 ($1.99, universal) can rectify this omission and is a pretty decent replacement for Google’s own iOS music app.
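Before moving on to streaming and the final recommendations, it is worth putting the storage prices above side by side. The quick Python sketch below is my own back-of-the-envelope comparison using the prices and limits quoted earlier.

    # Annual cost per 1,000 stored songs for the cloud options above.
    # Synology is excluded: no subscription and no song limit, but you
    # pay for the hardware up front.
    options = {
        "Google Play (free tier)": (0.00, 20_000),
        "iTunes Match":            (24.99, 25_000),
        "Amazon Cloud Player":     (24.99, 250_000),
    }
    for name, (dollars_per_year, song_limit) in options.items():
        per_thousand = dollars_per_year / (song_limit / 1_000)
        print(f"{name:24s} ${dollars_per_year:5.2f}/yr, up to {song_limit:>7,} songs, "
              f"~${per_thousand:.2f} per 1,000 songs per year")

The arithmetic makes the later advice concrete: Google is free up to 20,000 songs, and Amazon is by far the cheapest per song once your library outgrows that.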
Device-free streaming and casting

All four of the iOS apps can stream music directly from the device to either a Bluetooth-enabled speaker or an AirPlay-enabled device, and they each support playing music in the background. Synology can stream music directly to one or more AirPlay, DLNA or Chromecast devices, and you can even remotely control your home music system from anywhere on the internet. With Apple’s Remote app for iTunes, you can play music stored on a computer running iTunes Home Sharing over any AirPlay-enabled device. To stream music directly from an iTunes Match library to an AirPlay device, you will need an Apple TV. This is similar to playing your Google Play library directly on Chromecast, or your Amazon Cloud library directly on Fire TV.

Use a combination or build your own

For most people, Google’s free storage of 20,000 music files will be more than enough space to store a music library online. When it comes to buying new music, simply use Amazon: buy physical CDs that qualify for AutoRip and upload them to Google Play. When it comes to playing your music, you can always sync your Google or Amazon library and automatically add songs to iTunes on your computer. That way you can take advantage of any Apple TV, AirPort Express and any AirPlay-enabled speakers or audio components you may have.

If you have a really large music library, either use Amazon, which can store 250,000 songs online, or buy a Synology device. Just keep in mind how long it will take to upload 250,000 music files to Amazon. Not only will Synology be faster on the uploads, but its ability to play back over AirPlay, DLNA and Chromecast is a major plus.

posted 5 days ago on gigaom
I recently got my hands on more than 1.3 million tweets, all mentioning bitcoin or creator Satoshi Nakamoto, spanning the entire month of February 2014. My goal was to get a sense of who’s actually interested in bitcoin (enough to tweet about it, at least) and to see how activity on Twitter tracked with big news stories. Here’s what I found. Click on any image for a larger version.

(Disclaimer: I’m neither a statistician nor a programmer, so I used relatively simple tools for analysis and worked with companies on other aspects. Gnip, which is now part of Twitter, supplied the data. I used Chartio’s cloud-based analytics service for much of the quantitative analysis and some of the visualizations. I used Google Fusion Tables for the Marc Andreessen graphs, Tableau for the analysis of news content, and AlchemyAPI analyzed the Mt. Gox tweets for sentiment.)

Who’s tweeting about Bitcoin

All told, 333,144 unique accounts posted messages related to bitcoin. But not all accounts are created equal. Some are clearly spam accounts or other types of bots — thousands had posted no more than a single tweet — while others seemingly constantly churn out the latest bitcoin rates and news items. Here are the top 10 most-active accounts for February.

Most active accounts (number of tweets)

BTCNewsticker: 16,136
AllThingsBTC: 15,013
bit8coin: 8,790
Cryptogeeks: 7,832
ProjectCoin: 7,587
forexrelvo: 7,176
BTC_LTC_Xchange: 6,480
alt_bit_coins: 6,237
Regarding BTC: 5,018
AlertCoin: 4,837

Here are the 10 usernames that received the most @ mentions. Not surprisingly, bitcoin news specialist Coindesk takes the cake. Most of these numbers are actually higher (some much higher) because tweets often reference multiple accounts, but the data format made it difficult to include those tweets in the count.

Most @ mentions (excluding tweets mentioning multiple users)

Coindesk: 13,203
Bitcoin: 6,929
BitCoinReporter: 5,587
BTCNewsTicker: 4,925
baconbkk: 4,491
BitcoinVOX: 4,310
Forbes: 4,026
aantonop: 3,869
ConanOBrien: 3,447
ProjectCoin: 3,366

Some were one-hit wonders with massively retweeted posts, such as talk show host Conan O’Brien:

“Wow. Strippers get angry if you make it rain Bitcoins.” — Conan O’Brien (@ConanOBrien) February 17, 2014

And someone called Bacon Bangkok:

“if you wanna buy Bitcoins easily and safely here is the best place localbitcoins.com/?ch=17z8” — Bacon Bangkok (@baconbkk) January 28, 2014

I filtered out one account that’s still active but generated, in about an hour, more than 4,400 spammy retweets of a post, now removed, about opening a free bitcoin wallet. (That message accounts for the spike in tweets on the evening of Feb. 8, which is shown below.)

Marc Andreessen gets a lot of bang for his buck

I thought it was also worth looking at the most-active verified accounts to get a sense of which (presumably) respected individuals and publications are tweeting the most about bitcoin. Venture capitalist Marc Andreessen (@pmarca) topped the list in February.

Most active verified accounts (number of tweets)

pmarca: 145
binarybits: 84
businessinsider: 78
ForbesTech: 70
MarketWatch: 67
paulvigna: 66
Forbes: 64
AsheSchow: 62
CNNMoney: 58
kashhill: 53

It shouldn’t be surprising that Andreessen gets a lot of mentions — more than 3,600 (including those that include other users) — which is impressive compared with his mere 145 tweets. Here’s an interactive graph (and here’s a link to it) showing the vast network he’s created.
Small yellow nodes represent tweets mentioning Andreessen and someone else, and blue nodes are the accounts doing the tweeting. Andreessen is the large yellow node, which connects with everyone who has mentioned him alone and everyone he has mentioned. As you can see, though, the network of users to whom Andreessen actually replies or tweets directly is significantly smaller.

Where is bitcoin (possibly) popular?

Here are the top Twitter time zones for tweets, which should be taken with a grain of salt, of course, given the number of spam accounts and accounts with either no time zone listed or false time zones. Also, these numbers are for aggregate tweets, not the number of unique accounts tweeting from each time zone. For example, the more than 15,000 tweets from AllThingsBTC account for about 20 percent of tweets from the London time zone. Breaking it down by users’ specified locations, things look a little different. You’ll notice the myriad ways of saying New York City, for example, and Cryptogeeks representing #Bitcoin #Litecoin #Altcoins.

If you wonder what this looks like visually, here’s a relatively small sample of user locations mapped using Google Fusion Tables. It’s not entirely accurate — Fusion Tables appears to try to place everything, even if it’s not a real location — but it gives a sense of how global bitcoin is.

The Mt. Gox meltdown

While the charts above are high-level info about who’s tweeting, any analysis of bitcoin in February isn’t complete without examining the demise of the popular — and original — bitcoin exchange Mt. Gox. Its death throes are represented in these timelines. The bump on Feb. 7 corresponds to Mt. Gox’s official announcement that it was temporarily suspending bitcoin withdrawals. Feb. 10 is the date Mt. Gox extended its suspension on withdrawals, sending the price of bitcoin plummeting. The evening of Feb. 24 is when news broke that Mt. Gox had “lost” about 750,000 bitcoins, worth about $375 million at the time, and tweets skyrocketed the next morning. Mt. Gox filed for bankruptcy on Feb. 28. Here’s how it looks by the hour. Notice the aforementioned spam spike on Feb. 8.

Not surprisingly, while the media really began picking up on the Mt. Gox meltdown after its late-night announcement on Feb. 6, chatter about withdrawal problems and the imminent demise of Mt. Gox had been growing on Twitter. The roughly $100 decrease in bitcoin’s value on Mt. Gox in a single day on Feb. 5 didn’t help the cause.

“@BitcoinOnReddit The rumour is MtGox’s ponzi is about to fall through and the owners may be brought up on charges – it’s def time to sell” — ReginaldConwayIII (@ReginaldConway) February 06, 2014

On Feb. 4, Swedish Pirate Party founder Rick Falkvinge reported that Mt. Gox had already racked up more than $38 million in unfulfilled withdrawals (i.e., bitcoins left users’ accounts but never made it to the users).

“Dear #redditors, here’s the submission of the MtGox problems for your upvote liftoff: reddit.com/r/Bitcoin/comm…” — Rick Falkvinge (@Falkvinge) February 04, 2014

Overall, sentiment about Mt. Gox seemed to follow the news pretty closely. In the chart below, the blue line represents negative tweets, the yellow line represents positive tweets and the purple line represents all tweets that day about bitcoin. On Feb. 25, negative tweets mentioning Mt. Gox represented more than a quarter of all tweets about bitcoin.
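For anyone who wants to reproduce this kind of daily tally without Chartio, a minimal pandas sketch follows. The file name and column layout are my own assumptions for illustration, not Gnip’s actual delivery format.

    import pandas as pd

    # Assumed layout: one row per tweet, with a username, a UTC timestamp
    # and a sentiment label ("positive", "negative" or "neutral").
    tweets = pd.read_csv("bitcoin_tweets_feb2014.csv", parse_dates=["created_at"])

    # Top 10 most active accounts.
    print(tweets["username"].value_counts().head(10))

    # Daily volume, and the share of each day's tweets that are negative.
    daily = tweets.set_index("created_at").resample("D")
    volume = daily.size()
    negative_share = daily["sentiment"].apply(lambda s: (s == "negative").mean())
    print(pd.DataFrame({"tweets": volume, "negative_share": negative_share}))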
Bring in the journalists

When journalists finally do get wind of a story, their articles tend to spread pretty well. Overall, there were more than 247,000 “unique” URLs shared in February, and they were shared more than 1.02 million times. I analyzed the top 10,705 URLs, which ended up being anything shared at least 12 times. Of those, 1,328 (well under 1 percent of the total unique URLs) came from — in some way, shape or form — 25 technology and general-purpose news sites that I chose to examine. They were shared 121,931 times, which accounts for about 12 percent of all sharing activity for February.

(A note about URLs, though: it’s difficult to put an exact number on unique URLs because so many are syndicated or shared via RSS, Google or other social sites that alter the URL in some way. I shortened the cells in Excel to 100 characters, which seemed like a good way to catch cells with multiple links and still cut off a good number of single-URL cells before the social tagging kicks in.)

Long story short: total shares is a more accurate measure than unique URLs for assessing the popularity of a publication. Running this same analysis with the top 44,810 URLs, for example, results in a lot more URLs for each publication but minimally higher total shares. Now watch what happens when we include Coindesk, a news site focused on crypto-currencies. Check out the interactive version of this chart here.

Still, though, news sites are no match for spam messages and links to bitcoin wallets, exchanges, monitors and other types of non-news sites. The top 15 most-shared URLs were all for such sites and accounted for 76,534 total shares. You can examine them here. This article from Wired (well, an earlier version, the URL of which appears to have been replaced) was the most popular news URL, with 1,404 shares.

Ode to Satoshi

And amid all those tweets, I found this. Enough said.

posted 5 days ago on gigaom
The beauty of open-source software projects is the passion they generate from so many people. The drawback of open-source software projects can be that same sort of passion: with so many developers and so many vendors involved, there is often a proliferation of projects that can confuse the issue — and customers. That’s what happened around the whole OpenStack and PaaS situation. That’s one topic that OpenStack COO Mark Collier and Executive Director Jonathan Bryce take on in the latest Structure Show podcast.

OpenStack is the four-year-old cloud framework initially pushed by NASA and Rackspace but now backed by a huge array of companies, from Cloudscaling and Mirantis to IBM, HP, Cisco, you name it. But various partisans keep adding new projects and capabilities, which is what happened with the whole Project Solum mess. Solum was a PaaS-like project pushed initially by Rackspace that seemed to muddy the water about whether OpenStack was basic infrastructure or would also evolve into a higher-level PaaS. There is already an open-source PaaS option in Cloud Foundry, backed by many of the same vendors as OpenStack itself, so there was some confusion.

Collier sought to clarify this with us last week, noting that just because someone sparks up a project doesn’t mean that project will be incorporated into OpenStack. “There’s some confusion around projects that are floating around outside OpenStack that are related to but not part of OpenStack — and may never be a part of OpenStack,” he said. “When we looked at things related to [OpenStack] including Solum, it’s a very long list. The way the process works is a small subset [of these projects] apply for incubation and if approved there’s an 18-month or longer phase [before they can go into] the integrated release. Solum has not even applied for incubation and I’m not sure if they will.”

Given that Rackspace, a big Solum backer, has since signed on for Cloud Foundry, this issue may evaporate. Or not. For more on what’s up with OpenStack, including the biggest features of the new Icehouse release, check out our full Structure Show podcast with Collier and OpenStack Foundation Executive Director Jonathan Bryce. And don’t forget, we’ll discuss how OpenStack, CloudStack and Eucalyptus private cloud deployments will fit into the overall landscape at Structure in June.

OpenStack Foundation COO Mark Collier (L) and Executive Director Jonathan Bryce (R)

posted 5 days ago on gigaom
Nike has let go most of the members of its FuelBand wearables unit and will stop developing new versions of the device, according to a report by CNET. The tech news site quoted an unnamed person “familiar with the matter” as saying that the firm has laid off 70 to 80 percent of its 70-person hardware division and will not produce a new version of the FuelBand, but will continue to sell the existing version. Some industry observers believe Nike has decided to align itself with Apple, which is expected to launch an iWatch or other wearable device that might run Nike’s Fuel software. Apple CEO Tim Cook is a member of Nike’s board.

posted 5 days ago on gigaom
For a generation obsessed with selfies and taking pictures of any meal set before us, mobile photography has become second nature. We reach for the closest camera — normally our phones — and take photos of everything from sunsets to coffees, posting them to Instagram to see how many likes we can get. Snapwire, a Santa Barbara, Calif.-based startup emerging from beta, wants to connect the new photography generation with clients who need their talents and are willing to pay for them.

“This new generation of photographers did start out on mobile and they are the ones who have the most passion for their photography. They’re the ones who are assembling in San Francisco to do Instameets and to really get that validation on the photographs,” said Chad Newell, founder and CEO. “So we built Snapwire to give them the opportunity to sell their work, the ultimate validation really.”

Snapwire’s homepage invites users to search for an image from its growing library or to create their own request. (Photo courtesy of Snapwire)

The site has already attracted more than 8,000 users and 50,000 image uploads — not to mention the eye of some bigger advertising clients, like Denny’s, that are looking for images — and the company is now working to secure another round of funding. Snapwire is different from normal stock photography websites in that it specializes more in creative collaboration and crowdsourcing — think 99designs more than Shutterstock. Anyone looking for an image — whether it’s Denny’s wanting a late-night diner shot or a mobile app looking for a brand image of strangers falling in love — can post a creative request and assign a price based on their budget. Photographers can then upload photos to match, or ask questions of the requester, similar in style to TaskRabbit.

Screenshots of the mobile app show a list of requests, a sample request for a photo and a photographer’s profile. (Photos courtesy of Snapwire)

The buyer goes through and nominates finalists, which in turn rewards the photographers with points, before purchasing the final photograph under a royalty-free license. Snapwire then collects 30 percent of the earnings, with 70 percent going to the photographers themselves, who also get to keep the copyright. Snapwire also has the option of selecting some of the images to add to its standalone stock photography library that anyone can search and buy from (although the pricing and payouts are different).

The advantage of the request system is how well it works on mobile. A photographer can open the app, see that someone needs an artsy photo of coffee, and take a quick photo the next time they stop for a cup of joe. But to avoid flooding the marketplace with every image of a coffee cup ever taken, photographers first have to submit a portfolio of four photos to be approved by the Snapwire staff. Once approved as a photographer in the app, users can earn points when their images are purchased and move up in the site’s photographer rankings, essentially gamifying it too, Newell said.

And the site isn’t strictly mobile either. About 30 percent of the images still come from DSLRs and can be uploaded through the main site or through Dropbox integration in the app, Newell said. Someone with a DSLR can check the app on their phone, see an assignment for the Golden Gate Bridge and grab their camera to shoot it. “The lines are blurring right now on mobile versus traditional,” Newell said.
“So many of our devices can be used because it’s the most convenient camera that we have with us.”

Snapwire’s not the first to head into the mobile realm of stock photography, though — and it’s a tough market to break into. Other companies like Twenty20 also let photographers shoot more authentic photos and upload them from their phones. Foap likewise specializes in bringing camera-phone photos to companies, including a Foap Missions feature, similar to Snapwire’s request function, that helps brands direct the photos they want. Most recently, Getty Images quietly released Moment, an app similar to Snapwire that lets mobile users upload directly to Getty and respond to requests. That means a very big dog just entered a very crowded fight for mobile photography talent.

posted 6 days ago on gigaom
After months of delays, SpaceX’s Falcon 9 rocket lifted off today carrying cargo bound for the International Space Station. The Dragon capsule atop the rocket, which will complete the final leg of the journey to the ISS, contains an array of important science experiments, including NASA’s OPALS project, which will test using a laser to transfer data between the space station and Earth. SpaceX will also deliver parts to repair a broken backup computer that is part of the ISS’s robotics system.

The launch was originally scheduled, tentatively, for September 2013, but was pushed back repeatedly by NASA due to limited docking opportunities and equipment issues on the ISS. SpaceX scrubbed a launch on April 14 after experiencing a helium leak. Today’s takeoff marks SpaceX’s third mission carrying cargo to the ISS for NASA.

Screenshot from livestream of SpaceX’s successful launch Friday.

posted 6 days ago on gigaom
This spring is shaping up to be a very contentious season for almost everyone who has a stake in the country’s wireless airwaves. The Federal Communications Commission on Friday released its recommendations for how the upcoming broadcast airwaves incentive auction should be conducted, which would impact mobile carriers, TV broadcasters and proponents of free-to-use unlicensed spectrum. The incentive auction will be the first of its kind, requiring an enormously complex process involving a reverse auction, a reconfiguration of the UHF TV band and a forward auction of newly created 4G licenses – and there’s no guarantee that the FCC can pull it off.

(Source: FCC)

The spectrum in question is in the 600 MHz UHF band, which now carries TV signals in markets all across the country. But mobile carriers have long had an interest in the band, because its low frequencies would let their LTE signals propagate further, creating networks with greater coverage. The key is for the FCC to convince hundreds of TV stations around the country to sell off their licenses and instead share channels with other stations, migrate to the VHF band or go off the air entirely. If enough broadcasters participate, the FCC will get enough spectrum to hold a traditional spectrum auction for mobile carriers – if they’re willing to meet the broadcasters’ prices.

The nuts and bolts

The FCC’s report and order (R&O) – which will go before the full commission at its May 15 meeting – tries to maximize the chances that the incentive auction doesn’t flop by creating as much leeway in the process as possible. The R&O recommends splitting the airwaves into small, discrete chunks of 10 MHz (5 MHz for the uplink and 5 MHz for the downlink). Then it would split those licenses up geographically into partial economic areas (PEAs), allowing operators to bid on them on a market-by-market basis (as opposed to the big nationwide or regional licenses the FCC has auctioned in the past).

(Source: Shutterstock / Refat)

So, for instance, if many broadcasters in LA decided to part with their airwaves, the FCC could still auction off multiple 4G licenses in southern California even if broadcasters in New York decide not to participate. And if only a few broadcasters in any given market are interested, the FCC could still pull out some usable 4G spectrum (it only takes two 6 MHz broadcast licenses to create a single 10 MHz mobile broadband license).

The other big issue is how much new unlicensed spectrum would be created in the band. As opposed to licensed airwaves, which are controlled by a single carrier, unlicensed airwaves are open to anyone to use, and they form the backbone of Wi-Fi and Bluetooth communications. In lower bands like 600 MHz, those airwaves could be used for new longer-range white-spaces broadband technologies. While unlicensed advocates want the FCC to dedicate as much as 24 MHz to free-to-use airwaves, the FCC’s proposal would set aside one specific band for unlicensed use: Channel 37, which is used today for radio astronomy and medical telemetry. But the FCC would also allow unlicensed use in the “guard band” between TV and mobile broadband – think of it as a DMZ where no cellular or broadcast signal can tread – and in a section of airwaves called the duplex gap, which divides the uplink 4G signals from the downlink signals.
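The conversion math is simple enough to sketch. The toy Python model below is my own illustration, combining the two-for-one license arithmetic above with the guard-band padding described next; the FCC’s actual repacking optimization is far more involved.

    # Each reclaimed TV channel is 6 MHz wide; each mobile broadband
    # license is 10 MHz (5 MHz uplink + 5 MHz downlink). Leftover
    # megahertz get padded into the guard/duplex bands for unlicensed use.
    def repack(stations_sold: int) -> tuple[int, int]:
        reclaimed_mhz = stations_sold * 6
        licenses = reclaimed_mhz // 10     # paired 5+5 MHz 4G licenses
        leftover_mhz = reclaimed_mhz % 10  # spills into guard bands
        return licenses, leftover_mhz

    for sold in (3, 7, 14):
        licenses, leftover = repack(sold)
        print(f"{sold:2d} stations sold -> {licenses} mobile licenses, "
              f"{leftover} MHz extra for guard bands")

In practice the band plan fixes guard-band widths market by market, but the basic trade holds: the more stations sold, the more licensed and unlicensed spectrum comes out the other side.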
(Source: Shutterstock / iconmonstr)

When repacking broadcasters’ 6 MHz channels into 10 MHz licenses, the FCC would add all of the leftover megahertz onto the guard and duplex bands, so the more broadcasters participate in the auction, the more unlicensed airwaves will be created in a given market. FCC officials estimated that could be anywhere between 12 and 20 MHz.

What’s at stake

There’s a lot on the table, and there are a lot of competing interests taking their seats, some of whom may elect to take their chips and leave. The FCC has to convince broadcasters — who tend to have a deep distrust of their regulator — that this auction is in their best interests. Meanwhile, the carriers are fighting amongst themselves about how the spoils will be split. The FCC’s R&O doesn’t even address one of the most controversial parts of the auction: whether AT&T and Verizon will face restrictions on how much spectrum they can bid on in any given market. Congress is calling for an unfettered auction where AT&T and Verizon have free rein, though lawmakers seem more concerned about boosting auction revenue for federal coffers than they are about competition.

FCC Commissioners (L to R): Commissioner Ajit Pai, Commissioner Mignon Clyburn, Chairman Tom Wheeler, Commissioner Jessica Rosenworcel and Commissioner Michael O’Rielly (Source: FCC)

FCC Chairman Tom Wheeler is backing plans that would limit those two megacarriers’ ability to bid in the auction, to ensure that smaller regional carriers, as well as Sprint and T-Mobile, will be able to pick up licenses. AT&T this week threatened to back out of the auction entirely if the final rules go against it. Whether AT&T is just posturing remains to be seen, but if it sticks to its threat, it would take a major bidder out of the auction and increase the chances that the FCC fails to meet its bidding revenue targets.

The auction is still over a year away, scheduled for mid-2015, but one thing is certain: nailing down the rules of the auction is going to be a long, controversial process, all the way up until the first bid is placed.

posted 6 days ago on gigaom
In this week’s bitcoin review, we recap how MtGox and the hunt for bitcoin’s creator have managed to dominate headlines again.

This week in Satoshi Nakamoto: Is it Nick Szabo?

It’s a new month, so it’s time for a new bitcoin creator candidate and MtGox scandal. Up to bat for Satoshi Nakamoto this week is Nick Szabo. Szabo, a respected blogger, was identified as a potential creator of bitcoin after a forensic linguistics study said his writing patterns had an “uncanny” likeness to that of the original bitcoin paper. The Aston University study matched a different, independent study from December that also identified Szabo as a possible author.

Reaction on Twitter and Reddit has been mixed — while Szabo has been at the top of many people’s lists for some time, he’s also repeatedly denied it, and people who know him are stepping forward to support his denial. While the study possibly matches Szabo to the paper, it doesn’t mean he’s the sole creator either. Another commonly held belief is that Satoshi Nakamoto might be a group of people working under that pseudonym — if so, Szabo might just be a piece of that puzzle.

Meanwhile, MtGox continued its struggles in court. Last week, it was rumored that the exchange’s CEO, Mark Karpeles, would be arrested if he attended a hearing about the exchange in the U.S. He obviously didn’t attend, but his legal team is pushing ahead in the U.S. courts and is trying to recover $5 million that had been seized by the Department of Homeland Security. In Japanese bankruptcy court, the MtGox team asked to switch to liquidation proceedings rather than restructuring, lessening the chance of recovering any of the missing money.

The market this week

Bitcoin didn’t stay below $400 for long. The price has climbed more than $100 this week, closing some days as high as $530. The market closed at $494.10 on Thursday, but had fallen to $478 at 9 a.m. PST. For background on why we’re using Coindesk’s Bitcoin Price Index, see the note at the bottom of the post.

Here are some of the best reads from around the web this week:

- Don’t hold your breath for bitcoin purchases on Amazon — the e-commerce titan said it doesn’t have any plans to “engage” in bitcoin commerce.
- If you’re mining bitcoin, it might help to have your dad’s power plant be the one generating the electricity.
- It’s been a week for bitcoin documentaries. CNBC has a great one, while a more stylized documentary, The Rise and Rise of Bitcoin, is circulating the film festival crowd.
- Dogecoin, meet Dogecon. The shiba inu- and Comic Sans-loving altcurrency is hosting a meetup next week in San Francisco. I’ll be attending, but City AM is right to question whether Dogecoin is starting to cannibalize itself.

Bitcoin in 2014: the history of bitcoin’s price

A note on our data: We use CoinDesk’s Bitcoin Price Index to obtain both a historical and current reflection of the bitcoin market. The BPI is an average of the three bitcoin exchanges which meet its criteria: Bitstamp, BTC-e and Bitfinex. To see the criteria for inclusion, or for price updates by the minute, visit CoinDesk. Since the market never closes, the “closing price” as noted in the graphics is based on end of day Greenwich Mean Time (GMT) or British Summer Time (BST).

Feature image from Pond5/StevanoVicigor

posted 6 days ago on gigaom
Distributed computing is nothing new, but like the Big Bang, what was once contained as a singular node of computing has exploded into an ever-expanding number of real and virtual machines traveling farther and farther from any central origin. It’s not a perfect metaphor. There was never just one mainframe or one data center, but the thinking is similar. The number of nodes is increasing, and their placement on the network is moving further and further out.

Which is why this year at Structure we’re pushing further and further into use cases and an understanding of how one builds computing that no one organization has control over. Can computing embrace entropy while still delivering reliable results? The event, held in San Francisco on June 18 and 19, attempts to discover how big names in webscale computing are thinking about the edge and designing applications that can span both the cloud and individual sensors. But while the Googles and Facebooks might be the leading edge, how far can companies like HP or VMware drag enterprise clients into the future, and what’s holding them back?

1. An application that lives in every time zone

Over the years Google has driven the technology behind distributed computing with technologies such as MapReduce and Spanner. It is clearly thinking about how to build applications that aren’t isolated in one data center or even one time zone. This type of distributed thinking is behind its latest networking investments, and it is why Urs Hölzle, SVP of technical infrastructure and Google Fellow, is speaking at Structure. But we’re also bringing in others who understand these problems, including Facebook’s Jay Parikh and Microsoft’s Scott Guthrie.

Urs Hölzle

2. Building trust on untrusted hardware

Securing more and more devices isn’t just hard; it’s becoming impossible as we span different clouds, data centers and networks. With security flaws like Heartbleed, or weak physical endpoints such as the point-of-sale terminals that led to the Target data breach, we’re going to need a new model for security. Matthew Prince, the CEO of CloudFlare, has some ideas on how to implement security in this brave new world.

Matthew Prince

3. Going to light speed

It’s not enough to say that computing will need to occur over a greater number of devices. We also have to improve the speed at which information travels over the myriad networks it will have to traverse to help with real-time data processing. We’ll also need new forms of memory capable of holding more data close to the compute, possibly with its own processing built in. Speakers such as Andreas Bechtolsheim of Arista Networks, Diane Bryant of Intel and Vinod Khosla can help us understand the hardware problems in these three areas and the potential solutions in the works.

Andy Bechtolsheim

4. Leave no computer or company behind

This concept of an explosion of data and endpoints is nothing new to the enterprise, which has dealt with it since mainframes evolved to the personal computer, and now to every employee bringing his or her own device. But each evolution has led to more complexity, and now the pressure is on to rethink the overall architecture to focus on agility. We’re bringing in Jamie Miller, the SVP and CIO of GE; Jeffery Padgett, senior director of infrastructure architecture for Gap; Don Whittington, VP and CIO of Florida Crystals; and Stephan Felisan, VP of engineering and operations at Edmunds.com, to explore how old-line companies will make this next evolution.

Jamie Miller
5. Abstract everything you hold dear

Part of the promise of this new style of computing architecture is that more people without deep technical skills can use technology to improve their business. But to make this possible, you have to make hard tech easy. Abstraction is how most companies are choosing to do this. We’ll have the granddaddy of abstraction, Amazon CTO Werner Vogels, onstage to discuss how far the world’s most popular public cloud can take that concept. Mike Curtis, the VP of engineering at Airbnb, will also join in, discussing the practical limitations of such a strategy.
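To make the MapReduce reference above concrete, here is a minimal single-process sketch of the programming model: map a function over records, shuffle by key, then reduce each group. It illustrates the idea only; Google’s implementation distributes these phases across thousands of machines.

    from collections import defaultdict

    # Minimal single-machine sketch of the MapReduce model. Not Google's
    # implementation, just the shape of the computation.
    def map_phase(documents):
        for doc in documents:
            for word in doc.split():
                yield word.lower(), 1  # emit (key, value) pairs

    def shuffle(pairs):
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)  # group values by key
        return groups

    def reduce_phase(groups):
        return {word: sum(counts) for word, counts in groups.items()}

    docs = ["the edge of the cloud", "the cloud spans time zones"]
    print(reduce_phase(shuffle(map_phase(docs))))  # {'the': 3, 'cloud': 2, ...}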

posted 6 days ago on gigaom
On Friday, social network and recruiting site LinkedIn announced via blog post that it had officially surpassed 300 million members, with 100 million of them based in the U.S. alone. The company has added roughly 100 million members since January 2013, and it noted that much of the growth in traffic has come from two areas: international markets and mobile. LinkedIn stressed that it is on the verge of its “mobile moment,” meaning that sometime in 2014, mobile use will actually surpass desktop traffic. The two trends feed directly into each other — mobile apparently already accounts for more than 50 percent of LinkedIn’s international traffic. But the company conveniently sidesteps any stats related to monthly active users, perhaps a sign that it is still trying to make its platform a daily destination.

posted 6 days ago on gigaom
At Nokia’s Here connected-car division in Chicago, researchers are poring over crowdsourced vehicle data from all over the world, trying to figure out how our future autonomous vehicles should comport themselves on the road. By comparing high-definition mapping data against the measured behavior of real vehicles, Here is determining the optimal, safest and most fuel-efficient way for autonomous vehicles to drive on any given highway or through any feasible intersection. There’s only one problem: the optimal way to drive is not the way real humans actually drive.

The traffic patterns on our highways and roads today would look very different if they were completely populated by autonomous cars. Those cars would space themselves far more closely than all but the most aggressive tailgaters would feel comfortable with. They’d brake sooner than human drivers do when approaching curves, and then they’d accelerate through those curves at speeds fast enough to turn some people’s stomachs.

(Photo: Ogi Redzic, Nokia Here VP of Connected Driving. Source: Gigaom / Kevin Fitchard)

This presents a problem for Nokia and other companies developing driverless-car technologies, Here VP of Connected Driving Ogi Redzic said. In these early days of autonomous driving, the auto industry has to take into account the foibles of human nature and the ingrained wisdom of the road when programming the driver logic of the first autonomous cars. Otherwise, independently acting vehicles might create the congestion and cause the accidents they’re intended to avoid as human drivers react to their seemingly erratic behavior. “Autonomous cars have to drive similarly to how humans drive,” Redzic said. “We have to humanize autonomous driving for it to gain acceptance.”

That doesn’t mean training driverless cars to weave within their lanes or flip off pokey drivers as they pass them on the shoulder. But it does mean programming some inefficient behavior into vehicles, getting them to match the typical patterns of human drivers as they, say, navigate a particularly sharp curve or position themselves between vehicles in traffic, Redzic said.

(Image: Nokia Here’s depiction of traffic speed patterns in a European city. Source: Nokia)

That won’t always be the case, Redzic added. Emulating human driver behavior will be key in emerging advanced driver assistance systems — which will take control of the wheel and pedals in emergency situations or give the car a slight nudge when it meanders outside of its lane — and in the early days of fully autonomous cars. But as more autonomous cars make it onto the road, Redzic believes humans will start adapting their behavior to the driverless cars, rather than the other way around. Redzic said he couldn’t predict an exact moment, but the day the number of autonomous cars matches the number of human-controlled vehicles would be a good starting point.

Ultimately, human driver behavior will have to change. We tend to think of driverless vehicles as a convenience — putting the car on autopilot so we can check our email without careening into a school bus — but there’s a much bigger picture.

(Image source: Shutterstock / TonyV3112)

Autonomous and connected cars will be much more efficient. Vehicles with similar destinations will “platoon” on the highway, minimizing lane changes and easing congestion. Vehicles connected to our transportation infrastructure will be able to route around accidents and make more efficient use of all the streets, roads and highways available.
With governments reluctant to invest more money in transportation infrastructure and the number of vehicles on roads only increasing, a key mission of autonomous driving will be to pack as many automobiles as possible onto our existing roads, moving them from their various points A to points B in the most efficient manner possible while minimizing the fuel they consume and the greenhouse emissions they produce. The alternative is global gridlock.
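As a rough illustration of the “humanizing” idea Redzic describes, here is a minimal sketch that blends an efficiency-optimal speed profile with one learned from human drivers, using a tunable weight. The numbers and the weighting scheme are invented for illustration and are not Here’s actual driver logic.

    # Illustrative only: bias an efficiency-optimal speed plan toward a
    # profile observed from human drivers. Numbers and weighting are
    # invented; this is not Nokia Here's actual algorithm.
    def humanized_profile(optimal, human, human_weight=0.6):
        """Interpolate per-waypoint speeds (km/h) between two plans."""
        return [human_weight * h + (1 - human_weight) * o
                for o, h in zip(optimal, human)]

    # Hypothetical speeds approaching, inside, and exiting a sharp curve.
    optimal_kmh = [80, 78, 75, 72, 75, 80]   # brakes late, carries speed
    human_kmh   = [80, 70, 60, 58, 65, 75]   # brakes early, exits gently

    print(humanized_profile(optimal_kmh, human_kmh))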

posted 6 days ago on gigaom
This week, make sure you check out our latest research report on utilizing cloud computing, big data and crowdsourcing to stay ahead of the competition. Also, Gigaom’s Structure conference is happening on June 18 and 19 in San Francisco and registration is picking up, so be sure to register before it is too late. Now, the jobs for this week:

Amdocs: Development Group Leader (Atlanta)
Synectics: Software Quality Assurance — Validation (Waukegan, Ill.)
Microsoft: Principal Software Development Engineer in Test Lead (Redmond, Wash.)
Northrop Grumman: Sr. Cyber Cloud Systems Administrator (Annapolis Junction, Md.)
Time: Director of Audience Ad Product Solutions (New York)

We also have other listings from companies like Zappos.com, Booking.com, Raytheon and more. Click here to see what else is on our job board.

posted 6 days ago on gigaom
There is a better-than-good chance that while relaxing on a beach somewhere, or sipping a martini in your favorite lounge, you have heard music that makes you raise your eyebrow and ask, “What kind of music is that?” That kind of eclectic sound — a blend of Asian, Middle Eastern, reggae, bossa nova, dub, electronica and chillout — is something Thievery Corporation has pioneered.

Eric Hilton and Rob Garza formed the group in 1996 and captured music fans’ imagination with the release of their 1996 debut, Sounds from the Thievery Hi-Fi. They have released eight studio albums, the latest of them being Saudade, which hit the stores on April 1, 2014. (They have also released 18 compilation albums.) They also started a label, Eighteenth Street Lounge Music (ESL), and have introduced many genre-bending acts such as Ursula 1000 and Nicola Conte.

I have been listening to their music for almost two decades, and recently I caught up with Rob Garza, who has moved to the Bay Area. The topic of our conversation was their new album, the bossa nova-inspired Saudade, which is perhaps one of the more important releases of 2014. Our chat wasn’t long — about 30 minutes — but we covered a whole series of topics. His comments about internet culture, streaming, Spotify and label economics were the most illuminating. Here is a highly edited version of our conversation.

Om: Thank you for making time. The first question I wanted to ask you was about the creative process and the internet — how it has changed and influenced folks like yourself.

Rob Garza: Back when we started, the internet was nowhere near as large as it is now in terms of music. Now everybody is using it. Back then you would actually have to go to record stores. Music was one of the ways of traveling through time and distance — whether you go back to 1977 in London, to the punk movement, or to mid-’60s Brazil to listen to bossa nova. Music was a major form of that type of traveling and communication. Now people almost take it for granted: you can go and Google and find all the most influential bossa nova records, and kind of be an expert within a day or two — not really, but you know what I’m saying. I think it’s changed everything: how we make music, how we listen to music, how we consume music, how we take pictures, how we write and communicate with each other. It’s a very different world.

Om: The friction to get information has gone down, and our ability to get more information quickly has gone up. Do you think that has given you a better ability to understand newer music forms faster, or has it taken that ability away? What I mean is, when I listened to the Rolling Stones for the first time, it was really expensive to buy a record when I was a kid in India. I was emotionally and financially very invested in the record, and I spent a lot of time trying to understand that music by listening to it again and again. Over a period of time I developed an emotional bond with it. Now I find it much more difficult to form a bond with an artist, a song or an album in that sense.

Garza: It’s very interesting how we value music these days. In some ways music has lost a lot of its value, and the emotional bond that you would have with a record back in the day. You would put it on, you would read the liner notes, you would spend the afternoon with it. You would maybe listen to it a couple of times and you would try to understand this particular piece of art.
Garza: Now what people do — I’m even guilty of it — is you have every song that you’ve ever loved on your iPod or iPhone. I go through and I’ll listen to 30, 40 seconds of 30 different songs without getting to have that emotional bond that I would if I actually put on a piece of vinyl and just sat in a room and listened, connecting with the whole experience of, say, an album — which is kind of a foreign concept today, because a lot of it is built on popularity on iTunes or Spotify, on which songs are more popular by a particular artist. That kind of connection doesn’t really exist the way that it did back in the day. With this new record, we wanted to explore a form of music that’s very inspirational to us and just really dive into it. We wanted to dive into it as a whole album, rather than just one or two songs on the B-side of a record.

(Photo of Rob Garza, left, and Eric Hilton, right, by Andrzej Liguz)

Om: One of the things I found about this new album was that I had to listen to it at least 20 times before I actually started feeling it. I got so used to listening to tidbits of your songs — one song somewhere, another one as part of somebody else’s playlist on Spotify — that I forgot what an album really sounded like.

Garza: Most people don’t listen to records that way anymore, and that’s the reality. Let’s say I’m a person who’s never heard of Thievery Corporation and I hear a couple of songs. What am I going to do? I’m probably going to go to iTunes, pick out probably the three or four most popular songs, download one or even a couple of them, and that’ll be my experience with Thievery Corporation. Very few people are going to go and buy the whole record and listen to it back to front, front to back, the way that we used to.

Om: Do you think we can have an album experience in this culture of snacking, this culture of Spotify? Is there room for album listening?

Garza: There is, but it’s in the minority. Most people just want to… did you use the word snack, snack on things? That’s a good way to put it. People are just snacking: “Oh, I want to try a little bit of this. Oh, I want to try a little bit of that.” The information is just moving so quickly that, in a way, it’s a little rebellious to make a record that’s just a soft-listening, beautiful record. Especially when we look at it — last week it was number one on the iTunes electronic charts all week, and there’s nothing really electronic about this album, so I thought that was kind of funny.

Om: You have an incredible vantage point. You are an artist yourself, you work with other artists, you also have a record label, and you are constantly on tour. Can you talk a little bit about the impact of things like Spotify, iTunes and all the digitization of music? There are a lot of people who don’t care much for Pandora and Spotify.

Garza: It’s great that people can explore different artists and find music on Spotify, YouTube, things like that. At the same time, do I think that it’s sustainable for the music community? I don’t think so, because a lot of this money just goes back into the pockets of the tech companies. Before, it would go to major labels and things like that. I’m not defending major labels, but at least major labels would take some of that money and invest it to find and develop new artists, and try to give artists a career. For me, the missing link in this whole equation is that that money goes to Google Play or iTunes or Pandora or Spotify. The royalties are minuscule.
Garza: Also, those companies don’t make it a habit to invest in new music, new art and new talent. It keeps a lot of resources from coming back into the community.

Om: If you look at something like Spotify, many record labels are investors in the company, so from that standpoint the money is all going back into the labels. You can say the same for Beats Music, which is owned by music industry insiders. So what would you tell, for instance, the Spotify CEO he should do in order to make the lives of artists better?

Garza: The first thing is to be open to having a discussion to figure out what is beneficial to everybody — what makes it win-win, what makes it more fair for people. It’s so difficult for artists today to have a career unless you already have your — I hate to use the word — “brand.” Unless you’re already an established artist, it’s more difficult than ever to make a career, or to be able to live from making music. First, be open to discussing all of this and hearing what the artists have to say.

Om: If you were to ask them to do just one thing that changes a lot for the artist, what would that thing be, in your opinion?

Garza: The biggest thing people will say about Spotify is how minuscule the royalties are compared to when people were actually purchasing the music. It’s a totally different business model. You’re never going to put that genie back in the bottle, getting people to go and buy music. We live in a streaming world… trying to increase the royalties… I hear where they’re coming from in terms of trying to increase the volume, and if you increase the volume, more artists will get paid. I’m not sure I totally have an answer to that [laughs] question. That’s the million-dollar question.

Om: As a music lover, there used to be a lot of friction in buying your music. The internet, for all its faults, exposed me to a lot more music. A lot of your artists have become part of what I have acquired and listen to often. Before that, one had to think twice before buying a CD. The internet has increased the size of your audience. There are a lot more people who are aware of you, your group and your label worldwide, right?

Garza: It’s interesting you bring that up. One of the things that has happened through that… it’s not so much the awareness that has triggered it, but we’ve basically, essentially shut down the record label ESL. We’re putting out Thievery records, but we’re not working with any more artists, because we’ve gotten to a certain situation. Let’s put it this way: back in the day, we knew any artist we signed and put out a record for, it would sell at least 5,000 copies, right? You give artists an advance. There was some money to be made through selling CDs and through licensing and touring. Now, a lot of these artists… I don’t know if you saw that thing with David Lowery, from Cracker and Camper Van Beethoven, where he talks about how he had a million plays on either Spotify or Pandora, one of these streaming services, and basically he earned less money than he would have made selling a t-shirt at one of his concerts. Those are the kinds of economics we’re dealing with. When you run a small independent label, at a certain point it becomes like trying to squeeze a dry lemon: it’s a lot of work, and you’re not getting a lot of juice. In one way, it’s allowed people to learn more about these different artists that we have on our label. But even back when iTunes was the only thing on the block, it was a lot more beneficial and sustainable for artists.

Om: Wow.
Om: I did not know that you had essentially shut down your record label. That’s too bad, because you were the global sound curator, from my standpoint. You always had a lot of interesting groups on your label. What a shame.

Garza: Yeah. It’s tough, too, because these are your friends. You’re coming up to them and they ask, “What did we earn these last six months?” Here’s the $100; here are the numbers to show it. You do that enough times and you’re like, “I don’t really want to be in this part of the business, because it’s kind of depressing.”

All photos courtesy of Thievery Corporation.
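Lowery’s t-shirt comparison is easy to sanity-check with back-of-envelope arithmetic. The per-stream rates and t-shirt profit below are assumptions for illustration, since published payout figures vary widely by service and rights split; this is not ESL’s or Lowery’s actual accounting.

    # Back-of-envelope streaming economics. All rates below are assumed
    # for illustration; real per-stream payouts vary by service, deal
    # and rights split, and the t-shirt profit is a guess.
    ASSUMED_PER_STREAM = {"radio_style_songwriter_share": 0.00002,  # hypothetical
                          "on_demand_artist_share": 0.004}          # hypothetical

    plays = 1_000_000
    tshirt_profit = 10.00  # assumed profit on one concert t-shirt

    for scenario, rate in ASSUMED_PER_STREAM.items():
        payout = plays * rate
        print(f"{scenario}: {plays:,} plays -> ${payout:,.2f} "
              f"(about {payout / tshirt_profit:.1f} t-shirts)")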

posted 6 days ago on gigaom
John Meyer may make really expensive loudspeakers, but when it comes to high-end audio, the audio engineering pioneer prefers free. FLAC, the open source audio format popularized by Grateful Dead fans trading bootleg recordings, is “the perfect format” for music aficionados looking for higher-resolution audio, Meyer told me during a recent interview. And to him, any company trying to make a buck selling upsampled music is just out to sell snake oil. “It’s tricking people who don’t know enough about technology,” he said.

Ordinary music fans may never have heard of John Meyer, but chances are he has helped them enjoy music at one point or another. Speakers from Berkeley, California-based Meyer Sound, the company he co-founded with his wife in 1979, have powered tours from artists like Bob Dylan, Metallica, Herbie Hancock and Usher. They’re used for Cirque du Soleil shows, have helped address crowds of 800,000, and power churches, concert venues, casinos and movie theaters around the world.

(Image: A Meyer Sound system on tour with Bassnectar. Source: Meyer Sound)

In professional audio engineering circles, Meyer is regarded as a pioneer because he was one of the first to take the idea of linearity — meaning that the audio coming out of the speaker should sound exactly like the input, just amplified — from studio monitors to concert venues and stadiums. He was also an early proponent of self-powered speakers, which are speakers that already contain the amplifier and all related electronics. Most recently, Meyer has made waves with acoustic systems that can shape the sound of a room through a combination of microphones and loudspeakers, helping churches adapt to a wide variety of performances and keeping the noise level in high-end restaurants at bay. In other words, he knows a thing or two about audio.

I recently got invited to visit the Meyer Sound production facility in Berkeley, where the company locally produces each and every part of its speakers in a slow process that ensures quality control from start to finish, and chatted a bit with Meyer about how technology has been changing his industry. Overall, Meyer was very optimistic about the impact of new technologies. But when I asked him how this shapes the way consumers get to experience sound, he struck a cautious note. “I’m worried that my generation has gotten too lost in the technology,” he told me.

That’s because Meyer sees a move toward two extremes. On the one side is highly compressed sound, which Meyer called elevator music, only to add: “There is nothing wrong with elevator music. It just shouldn’t be the diet that everyone has.” On the other is a trend toward ever-higher sample rates that resembles the megapixel wars in the digital camera space, with companies trying to push digital music toward a resolution of 192 kHz, often combined with proprietary formats.

(Image: Meyer Sound loudspeaker driver manufacturing in Berkeley, California. Source: Meyer Sound)

Meyer said that it simply “doesn’t make sense” to go higher than 96 kHz / 24 bit, which already far exceeds standard CD audio’s 44.1 kHz / 16 bit. He also lamented that companies are trying to sell upsampled music — songs that were recorded at lower sample rates and resolutions, but are then altered to offer the appearance of higher resolution. “Using 24/96 is not the answer unless it is recorded in 24/96,” Meyer quipped, adding that people are getting wiser about snake oil claims, thanks largely to internet forums.
“You can’t win those fights anymore; you can’t bamboozle the public,” he said. Music companies and high-definition music vendors should instead embrace the open FLAC audio format, he suggested. “It’s well worked out, it’s geeky,” he said, adding that by his estimates, around 250,000 people are already downloading FLAC music files from the internet. He called on people in his industry to educate consumers about the value of something like FLAC. Instead, he said, many waste their time chasing new technologies. “I’m saying we should stop,” Meyer said.
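Meyer’s upsampling complaint has a testable signature: audio upsampled from CD masters has essentially no energy above the original 22.05 kHz Nyquist limit, no matter what the file’s sample rate claims. Here is a rough heuristic sketch of that check. It is my illustration, not Meyer’s method, and a real provenance test would need more care around dither, lossy artifacts and gentle anti-alias filters.

    import numpy as np
    from scipy.io import wavfile

    # Rough heuristic: if a "96 kHz" file was upsampled from CD audio,
    # almost no energy will sit above the old 22.05 kHz Nyquist limit.
    # Illustration only; real provenance checks need far more care.
    def looks_upsampled(path, legacy_nyquist=22050.0, floor_db=-80.0):
        rate, data = wavfile.read(path)
        if rate <= 2 * legacy_nyquist:
            raise ValueError("file does not claim a high-resolution rate")
        if data.ndim > 1:
            data = data.mean(axis=1)            # mix to mono
        x = data.astype(np.float64)
        x /= np.abs(x).max() or 1.0             # normalize
        spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / rate)
        hi = spectrum[freqs > legacy_nyquist].mean()
        lo = spectrum[freqs <= legacy_nyquist].mean()
        ratio_db = 20 * np.log10(hi / lo + 1e-12)
        return ratio_db < floor_db              # True -> likely upsampled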

posted 6 days ago on gigaom
In 2012, Chet Kanojia set out to take on TV’s goliaths with a slingshot full of tiny antennas, but he never imagined things would go so far, so fast. Aereo, the startup he created, goes before the Supreme Court on Tuesday to face off against ABC and the other big broadcasters in the most important TV case in decades.

Sitting in Aereo’s office in New York’s Soho district in early April, Kanojia looked wearier than when I first met him a year ago, but he still burns with the same quiet charisma and passion for creative technology. During that time, Aereo’s service, which lets subscribers watch and record over-the-air TV for $8 a month, has expanded from New York to 10 more cities. Recent media chatter has been about Aereo’s chances at the Supreme Court, but on this day our discussion was not about law. Instead, Kanojia shared some of his vision for TV, technology and the public airwaves. Here are some lightly edited highlights from our conversation.

On Netflix and why HBO will be spun off in 5 years

JJR: When this started you talked about a “smarter bundle” of channels. Has your view of the cable industry and TV evolved?

Chet Kanojia: My view has gotten stronger that a break in the current system is inevitable, irrespective of whether Aereo is around or not. Look at the current prices. Any industry compounding at 7% is just unsustainable. A change is inevitable. The right model will be if someone can put together broadcast access with either HBO or ESPN — then it’s game over. One of those pay channels has to go outside [the cable system], because if Netflix continues to do what they’re doing, then the virtuous cycle continues: they spend a little more on content to get more subscribers, and then they will start outspending all pay channels combined. The only way to compete with that will be for HBO or Showtime to go out of the bundle. The key to understand is that it’s not just a pure financial issue; it’s also a data issue. For example, Netflix can make those bets because it knows what those consumers are doing or not doing. When you’re a bundled network like HBO, you have no idea what the consumer does — and the cable guys have zero incentive to give you that information. I would say HBO will be spun out in 5 years.

On the future of the NFL and TV sports

JJR: The NFL and the other sports leagues are supporting the big broadcasters, who have threatened to remove their shows from over-the-air TV and become cable channels if Aereo wins the case. Will that actually happen?

Chet Kanojia: I don’t think they can do that. Just to put it in context, ESPN has Monday Night Football, and its audience is a fraction of what the broadcasters get. It’s not going to be economically viable for them to go that way. Frankly, if they’re going to go that way, why wouldn’t the leagues do their own direct paid relationships? MLB.com has been a template of where that’s going. I think a sports fan will be happy to pay $100 or $200 a year. The leagues don’t need a middleman. If they don’t need a broadcast affiliate, they certainly don’t need a channel — they already have their own, like the NFL Network.

The FCC and selling the public airwaves

JJR: Let’s talk about the airwaves which, as you’ve pointed out, are a public good the broadcasters are using. What’s going to happen to all that spectrum in the future?

Chet Kanojia: FCC Chairman Tom Wheeler is demonstrating that they don’t need [all that spectrum]: you can do frequency sharing, you can do channel sharing, and all sorts of other things.
The FCC has shown there’s an opportunity — while protecting the broadcasters’ statutory rights and ensuring they can still be on the air. The FCC is being extremely generous in offering this idea of an incentive auction to sell part of their spectrum, rather than using eminent domain.

JJR: What do you think the airwaves should be used for instead?

Chet Kanojia: To the extent they can stitch together blocks, it should be unlicensed wireless. I don’t understand why our country has this model where we grant companies access in perpetuity to spectrum, which is a finite and very constrained resource. Anytime there’s unlicensed use, it creates far more value. WiFi, Bluetooth, all of those things — the overall investment and the products made in that category far exceed what the licensed model has produced. It should be unlicensed, and if there is a license, it should be a term license. If the FCC truly wants competition, it will be about granting internet access. And the way to do that is to create a big unlicensed block. This isn’t a criticism of the FCC, but no matter what money they raise in the spectrum auction, it will be peanuts compared to the value of the unlicensed side.

On Apple and home entertainment

JJR: Who do you think is doing the best job in the internet TV and home entertainment space?

Chet Kanojia: I love Netflix. The new interface is great; I just love the company. We’re all dying to see what happens with Apple. More and more I’m convinced that the idea of tablets, phones and computers all being projectors is ultimately the right model. And by projectors I don’t mean optical projectors but IP projectors. I don’t know where things are on the music side, but everywhere in my house there is AirPlay capability. I love Sonos, but the problem is that the Apple iTunes sync is just manual, and that sucked. Apple’s keeping them out, I suspect.

On innovation in a Comcast world

JJR: Take us to a year from now. If Aereo wins, what will the company look like?

Chet Kanojia: We’ll be in 50 cities or so. We’ll start marketing effectively, and open our platform to new uses. We’ll provide our technology to small and medium-sized cable guys — on the network, DVR and application side. There’s a weird dynamic in the marketplace now: there’s no company stepping up to provide equipment in the video business, mainly because the cable guys have killed all the suppliers, so no investor is going to finance a new company. The world that we’re heading for is one where whatever Comcast builds is what you can have.
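Kanojia’s 7 percent remark is easy to make concrete: anything compounding at 7 percent a year roughly doubles every decade (the rule of 72 gives 72/7, about 10.3 years). A quick sketch, with a hypothetical $90 cable bill as the starting point:

    # Rule-of-72 sanity check on "compounding at 7% is unsustainable".
    # The $90 starting bill is a hypothetical, not a quoted figure.
    bill, rate = 90.00, 0.07
    for year in (5, 10, 20):
        print(f"year {year}: ${bill * (1 + rate) ** year:,.2f}")
    # Year 10 comes out near $177, roughly double, per the rule of 72.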

posted 6 days ago on gigaom
Seeing Beyond Technology: Advanced Management Program for Digital Leaders
A Continuing Education Course at the University of Southern California
Spring Session: May 5-9, USC Campus, Los Angeles
Register here

DISRUPTION. INNOVATION. CONVERGENCE. Technology is evolving at an unprecedented rate, transforming business models and the user experience. Hosted by USC’s Institute for Communication Technology Management (CTM), the Advanced Management Program (AMP) is focused on managing and leading in the age of mobile, digital, social, big data and the cloud. Participants typically represent the communications, technology and entertainment sectors.

Course topics include:
The connected, digital consumer and the emerging competitive landscape
Business strategy and innovation
Millennials as customers and employees
Driving positive change through executive storytelling
(For the course brochure, see: http://classic.marshall.usc.edu/assets/162/26169.pdf.)

REGISTER NOW. Enrollment in the Advanced Management Program is limited in order to provide maximum opportunity for interaction and teamwork. To register for the course now, go here.

DISCOUNTED ADMISSION FOR GIGAOM SUBSCRIBERS. Gigaom subscribers can receive a discount when they pay for the course: simply enter the code GIGAOM when you register, and your fee will be reduced from $8,400 to $6,000.

About us: Founded in 1985 at the University of Southern California, CTM is the world’s foremost institute at the intersection of technology and content. It unites a powerful network of industry leaders involved in every facet of the digital media value chain. For more on CTM, go to www.marshall.usc.edu/ctm.

posted 6 days ago on gigaom
Between Amazon threatening to use drones to deliver packages and etiquette questions about what to do when you find a fallen drone, the tech set and popular culture have become obsessed with the flying machines. So I asked my colleague Signe Brewster to come on the show to offer us a little intro-to-drones course. After all, many of them are connected. She shares a lot of information about what’s on the market today, what drones are doing, and how hard it is to learn to fly them indoors. She also asks for help in connecting her venetian blinds.

In the second half of the show I talk to Jason Johnson, the CEO of August, maker of a smart lock launched last June. The $199 lock was expected in December, and then in late spring, but now it’s been pushed out indefinitely. Johnson explains why it’s taken so long to ship. The short answer is that it’s hard to build a smart lock, but the long answer is worth hearing — you might learn something about building consumer products. Johnson ends with his thoughts on how the smart home will come together, and avoids telling me which product August will improve next.

Host: Stacey Higginbotham
Guests: Signe Brewster and Jason Johnson, CEO of August

It’s drones 101 time as we discuss what is on the market and what they can do
I try to think of ways to make a drone work for the connected home. Maybe it could walk my dog?
What makes a lock smart. Or a robot?
Why hasn’t my darn smart lock shipped yet?
Where is August going next with connected consumer devices?

Internet of Things Show RSS Feed
Subscribe to this show in iTunes
Download This Episode

PREVIOUS IoT PODCASTS:
What would you do with $100M? We talk to Prodea about connecting the world
Let’s get industrial data online, and moving the connected home
Dude, where’s my car? Plus a tour through a Savant home
Cooking with the internet of things and the coming wave of dumb “smart” devices
Another take on wireless power and the cool IoT stuff at SXSW
Will the smartphone eat the fitness tracker market? RunKeeper’s CEO says yes.
Overclock your car and hack the Google Glass prescription limitations
How do you bring the internet of things to the consumer? Two perspectives.
The internet of things is a developer nightmare … and opportunity
Podcast: Meet Skynet, an open source IM platform for the internet of things
Supporting a connected home is about education, not troubleshooting
Does your coffee machine need its own domain name?
Google’s new Nest, and NinjaBlocks adds gestures to smart hubs

posted 6 days ago on gigaom
Uber announced today that it will start tacking a $1 “safe rides fee” onto every UberX ride. In a blog post, the company said the extra dollar will go toward safety measures like background checks and driver education. Both Uber and Lyft extended their insurance options to cover drivers better last month. The extra $1 shouldn’t be too much of a cost burden, though, coming after UberX cut prices earlier this year.

posted 6 days ago on gigaom
Would you like to try out Google Glass before spending $1,500? Unless you know somebody with a pair, you’re out of luck. But a user on Reddit has posted images of the next best thing: a program where you can try on Google’s Glass-compatible frames and see whether they look good on your delicate face. Google has confirmed the trial program through its Google Plus page.

In exchange for a $50 deposit, Reddit user clide received four pairs of Glass in the mail, each with a different titanium frame and a different color Glass unit. All four pairs were nonfunctional; one photo shows a broken USB port, presumably so the unit could not be charged or plugged into a computer. Clide received his invite through an email update on the Explorer program. Unlike most Google initiatives, the ordering process did not take place online. He posted:

To sign up I went through a series of phone calls where I had to give them information in small chunks. One call to say I was interested, another to give them the best time to call me, another to give them my address, and a final one to give them my billing info for a $50 hold on my card.

We tried the phone number clide posted with the photos earlier this afternoon, but were unable to reach a person. Other eyewear companies, like Warby Parker and Shuron, send multiple sample sizes to their customers free of charge, mainly because eyewear fit is a tricky, individual process. Although Google Glass does not have sizes, it does have four frame options and five colors to choose from. This program seems to be targeted at certain customers and continues Google’s pattern of distributing Glass in ways that are not available to the general public. On Tuesday, Google had a one-day Glass sale to expand its user base. Its product page currently shows the eyewear as out of stock.

posted 6 days ago on gigaom
If the launches of various new-media entities over the past year — from Beacon’s crowdfunding efforts and Syria Deeply’s topic-focused site to Ezra Klein’s Vox project and Jessica Lessin’s The Information — have shown anything, it’s that there is no end to the experimentation going on when it comes to business models. But can a not-very-well-known blogger with no team behind him turn his writing into a successful freemium business? Technology analyst Ben Thompson is determined to try: he launched a new membership-based model on his blog Stratechery this week, and I talked with him about what he is trying to do and why.

Thompson is a former business development and marketing manager with Automattic, the company behind the WordPress blog platform, and has also worked for Microsoft in a similar capacity. Over the past year, he has developed a following for his long and thoughtful posts about technology companies such as Box and Apple, and the strategic thinking (or lack of it) behind their businesses — and it’s that following he is now trying to monetize.

Membership instead of just donations

Instead of a simple donation-style paywall, similar to what Andrew Sullivan has done with his site The Daily Dish (which has raised close to $1 million over the past year), Thompson has a series of membership tiers designed to offer different levels of experience and content. The version that is $5 a month or $50 a year includes the ability to comment and a full RSS feed, while $10 a month gives readers a T-shirt and access to daily lists of linked articles. The ultimate tier, which is $30 a month or $300 a year, gives readers everything in the other levels, but adds a private messaging function through an app called Glassboard, email access to Thompson, “virtual and in-person meetups,” and a book of the drawings he does for some of his posts. (A back-of-envelope sketch of what such tiers could add up to appears after this post.)

Thompson thinks one of the reasons he will succeed where others haven’t is that he has a better business model: “Most of the ones that writers have set up have been terrible — they’re just leaky paywalls, and so they wind up being basically just donation-based. The thing I like about Andrew’s model is the focus on the individual… I think that’s right. But the business model basically devolves into a donation model.”

Reward tiers instead of just a paywall

By giving readers a series of rewards targeted to specific use cases — whether they are content-based or more community- or interaction-based — Thompson said he hopes to get around some of the problems of paywalls. “The thing that bothers me about paywalls is that they punish your best readers, your biggest fans. I think freemium is a much better way to think about it… the vast majority of people can consume it and never pay, but for those who really like what I have to say, they can pay and they get access to more.”

Thompson said he is also a big believer in the single-voice blog, and he is concerned that some of the newer entrants in the new-media world — such as Nate Silver’s FiveThirtyEight site — have lost sight of what made them successful. Whereas every post and link that Silver used to publish had his voice and carried a certain brand expectation, Thompson said that identity is no longer as powerful because the site has broadened out into so many different topics. “You see all these sites coming out that are basically just recreating the old newspaper or magazine model.
It used to be that when I saw a 538 link I would click every time, because I knew what to expect — but that’s been diluted now. There’s something really powerful about single-author sites that you don’t get anywhere else.”

Less than a thousand true fans

Thompson, who said he has been thinking about this project for years, said that much of his inspiration for Stratechery came from John Gruber’s Daring Fireball site, which is run more or less single-handedly by Gruber and has become extremely successful with only a relatively small amount of advertising and sponsored content. (Thompson points out that Gruber was one of the unsung pioneers of sponsored content in new media with his sponsored RSS feeds, which he introduced a number of years ago.)

While Gruber has a big enough following that he can survive solely on advertising and doesn’t need to offer memberships, Thompson said he is trying to balance out his new venture with a number of different monetization approaches: one is membership; another is sponsored content (each post has a sponsor mention at the bottom); he is launching a podcast that will carry advertising; and he is accepting speaking engagements and may do other personal events.

And while Kevin Kelly has written about the concept of “a thousand true fans” being all an independent artist needs to survive, Thompson said that based on his calculations about the combination of advertising — he says he is currently getting about 40,000 unique visitors a week — and memberships, he needs “significantly less” than a thousand subscribers to consider his site a success.

Other sites that have taken a membership approach include Techdirt, which started as the personal blog of founder Mike Masnick and has become a business — with much of the value derived from the commenting community on the blog, which businesses can tap into for market intelligence. Techdirt’s membership layer includes things like early access to posts and the ability to take part in special forum discussions, as well as personal time with Masnick.

Post and photo thumbnails courtesy of Thinkstock / Aquir and Flickr user Christian Scholtz
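For a sense of why “significantly less” than a thousand members could work, here is a back-of-envelope sketch using the published tier prices and an invented member mix; Thompson has not disclosed his actual numbers, and the mid tier’s annual price is annualized from its monthly rate as an assumption.

    # Back-of-envelope membership revenue using Stratechery's published
    # annual tier prices. The member counts are invented assumptions;
    # the $10/month mid tier is annualized to $120 as a guess.
    TIER_PRICE_PER_YEAR = {"basic": 50, "mid": 120, "top": 300}
    assumed_members = {"basic": 400, "mid": 100, "top": 30}  # hypothetical mix

    revenue = sum(TIER_PRICE_PER_YEAR[t] * n for t, n in assumed_members.items())
    print(f"{sum(assumed_members.values())} members -> ${revenue:,} per year")
    # 530 members -> $41,000/yr before ads and sponsorships, on these guesses.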

posted 6 days ago on gigaom
Police carried out a raid on a home in Peoria, Illinois, in an attempt to unmask the person behind a Twitter account that lampooned the town’s mayor, according to a report in the Peoria Journal Star. A 27-year-old woman, who was one of three people in the house at the time of the raid, told the news outlet that the police seized every electronic item in the house, asked her about the Twitter account, and told her an “internet crime” had taken place. Her boyfriend was reportedly charged with possession of marijuana.

The account in question, @peoriamayor, showed a photo and official contact information of the town’s mayor, Jim Ardis, but published a stream of tweets about prostitutes, drinking and drugs that would be more befitting of the mayor of Toronto. The account has reportedly been suspended for several weeks. In response to phone inquiries, staff at the mayor’s office in Peoria confirmed that a city lawyer had prepared a search warrant, but declined to offer further details about the reasons for the raid, including whether the search was related to marijuana or, as the electronics seizure suggests, to the Twitter account.

The news report, if accurate, is troubling, since there is no reason a police raid would be justified to seize a parody Twitter account — the mayor’s office could have instead issued a subpoena to Twitter to identify the person behind the account, and pressed charges against them (though it is hard to say on what grounds). It’s also unclear why a parody account that did not use the mayor’s name was suspended. A person at Twitter, who did not want to be identified, said that the company does not comment on individual account suspensions, but said that Twitter looks at the “whole picture” to decide if someone is unfairly impersonating someone else. That’s why a clear parody such as the beloved @elbloombito account for New York’s former mayor Mike Bloomberg was acceptable, but a clear impersonation of the Peoria mayor might not be. Further details will likely be set out in the search warrant, which is not yet available online.
