posted 18 days ago on gigaom
This Is My Jam, a service that allows its users to share their current favorite song with anyone who might be interested in their musical preferences, has announced it will no longer accept new "jams" after early-to-mid September. The site debuted in 2011 as a small project inside The Echo Nest, a music intelligence company acquired by Spotify in March 2014. This Is My Jam was not part of that acquisition; it struck out on its own six months earlier, but as The Echo Nest is known for doing some pretty interesting things with music listening data, plenty of people kept their eyes on this company.

This Is My Jam's founders say in their announcement that they will make the service's data available to both users and developers in a variety of formats, that they plan to open-source much of its underlying code, and that they will keep an archived version of the site online so anyone can view its history. The archive will consist of a little more than 2 million "jams" shared by the 200,000 users the site attracted in the four years it's been running. That isn't a lot by today's standards, where something isn't viral unless it's used by a million people, but then, the site was kept relatively small almost by design.

The startup was a direct response to the rise of automated services that claimed to know what people liked based on how often a song was played. As co-founder Matthew Ogle told Pando in 2012: "[E]specially in the face of Facebook taking scrobbling mainstream with Spotify, it really [feels] like everything [is] being reduced to 'just listened,' auto-generated hype charts, and bland Youtube links shared in feed after endless feed." This Is My Jam was to serve as a counterpoint to those tools.

It's a compelling, romantic idea. Apple has taken a similar tack with its own services by having real, live human beings find the best news, music, and software instead of relying on the massive amounts of data available to it. Others have done the same thing — we're living in highly-curated times. Yet most of those services rely on hiring people to curate something that others consume. This Is My Jam flipped that on its head by asking everyone to curate everything for themselves, and while a small number of people were enthusiastic in "jamming" something each week, many others were not.

Ogle shared some information about This Is My Jam's usage in an email to Gigaom: "Overall activity had declined a bit over the last year; most months somewhere between 5-10 percent of our users would pop in to post at least one jam," he said, adding that the launch of new features "offset this somewhat."

That worked for a while. But in the face of a changing landscape defined by many companies vying for control of the music streaming market, the shift from desktop computers to smartphones, and increasingly complex licensing issues, This Is My Jam's co-founders have decided it's time to call it quits. "First and foremost, it feels like we've explored This Is My Jam's original mission best we could," Ogle and co-founder Hannah Donovan say in their announcement. "We're ready to free up our evenings and weekends for new ideas and projects, while hopefully doing good by the thing that made Jam great: the 200,000 of you who shared more than two million hand-picked songs over the last four years, week after week."

When I asked how much it might cost to run This Is My Jam when it becomes a "read-only time capsule" next month, Ogle said that most of the service's costs are associated with all the "moving parts" required to post new "jams." Without those costs, he said, the archived service will be "very affordable."

As others have pointed out, This Is My Jam is winding down the right way: by making sure all the data shared by its 200,000 users is available to them, in some manner, for the foreseeable future. Now it's up to other companies to show that music sharing is about more than play counts and YouTube videos.

Oh, and in case anyone else is tempted to make a joke about This Is My Jam becoming "These Are Our Preserves" or something else, like I was, Ogle said in his email that a friend has already offered the site a new domain — thiswasmyjam.com — but we'll have to wait and see if they end up using it.

Despite impending shutdown, 'This Is My Jam' to preserve user data

posted 22 days ago on gigaom
Really, Apple? Now you want to be my wireless carrier? Are you sure that's wise? I mean, sales of iPhone — great as they are — did not meet analyst expectations last quarter. iPad sales continue to disappoint. No one knows the actual Apple Watch sales numbers, but it seems as if nearly everyone thinks they are below original estimates. Even your biggest supporters have been mightily disappointed lately in Apple Music, the once-ubiquitous iTunes, and your software. Given that $90 billion, yes, billion, got whacked from your market cap soon after announcing last quarter's results, perhaps now is not the best time to tackle an entirely new line of business, especially one as messy and price-driven as wireless service.

Earlier this week, rumors ran hot that Apple was in active talks to offer an MVNO in the US and Europe. A traditional MVNO, or mobile virtual network operator, is a company that provides mobile service — 4G, for example — but doesn't own the actual infrastructure. Instead, the MVNO leases capacity from an actual carrier, an AT&T or Verizon, then sells this service directly to customers.

The opportunity

Companies like Disney and ESPN have tried and failed to launch a successful MVNO. The primary problem is that no matter what clever services you offer — ad-free radio streaming that doesn't count against your data usage, or clips of last night's top plays — the MVNO's costs are inevitably higher than those of the actual carriers, which own the equipment.

Apple no doubt believes it has a workaround. Unlike previous MVNO efforts, Apple makes its own smartphone. This opens up numerous untapped opportunities. For example, Apple might sell a monthly plan that includes the latest iPhone, voice and data service, free Beats 1 streaming, unlimited FaceTime calls, perhaps even a Discover Card-like discount on purchases when using Apple Pay. It's a tantalizing idea. One bill, one service provider, everything you do on your iPhone all cleanly managed by Apple. Customers would no doubt love this.

There's still another Apple advantage. Apple already uses a SIM card in select iPads that lets the customer choose from a variety of monthly plans or pay-as-you-go options. This current iteration is a bit crude, but in theory, Apple could use its super-popular iPhone to force multiple carriers to compete on price, offering iPhone customers the very best price across a range of carriers.

The denial

Will it happen? Uncharacteristically for Apple, the company quickly and very publicly shot down the rumor: "We have not discussed nor do we have any plans to launch an MVNO." I have problems with this denial. Firstly, "plans to launch" leaves enormous wiggle room. At present, I do not have dinner plans for tomorrow, though undoubtedly it will happen. Secondly, Apple has been keenly interested in being an MVNO since before iPhone — back when the failed Motorola Rokr was the only 'iTunes Phone' on the market. The original Apple patent behind that interest, which the company asked to extend in 2011, was created by Tony Fadell, the former senior VP at Apple who went on to create Nest, the home automation hardware company now owned by Google. As the original Fadell patent makes clear, Apple's proposed MVNO wouldn't simply lease capacity from a single network, but pit carriers against one another:

"Bids are received from multiple network operators for rates at which communication services using each network operator can be obtained. Preferences among the network operators are identified using the received bids, and the preferences are used to select the network operator for the mobile device to use in conducting communications."

There's still another clue that Apple is interested in the MVNO opportunity, despite the denial. Just last month, the Financial Times reported that both Apple and Samsung were in "advanced talks" with GSMA, a global telecom industry consortium, on an embedded SIM (eSIM) card that would let mobile phone users switch from one carrier to another on the fly. Right now, the traditional SIM card locks the user's phone to a particular network, so the potential for the eSIM, planned to launch in 2016, is huge. Maybe Sprint has excess capacity in Los Angeles and you choose Sprint. Then you travel to Silicon Valley, where T-Mobile offers the best price. In theory, Apple could have software to automate all this for you, choosing the best option based on price, time and place (a rough sketch of that idea appears at the end of this post). For those who travel from country to country, this could be a godsend. With an MVNO, Apple controls all the important pieces, the smartphone and the customer relationship, and reduces carriers to little more than dumb pipes.

The problem

This is almost certainly doomed to fail. The iPhone is the primary driver of Apple profits. Competing directly against the very carriers who market, sell and support these devices seems to stretch the bounds of corporate hubris. Plus, the carriers control an extensive retail footprint. Apple has 265 Apple Stores in the US, but there are over 4,000 Verizon and AT&T retail outlets. Add in Sprint, T-Mobile and others, and an Apple MVNO could lead to each of these carriers limiting their support of iPhone. It's an unnecessary risk. As Apple blogger John Gruber noted: "Apple is a partner with all the carriers around the world that support iPhone. They can't compete against them while partnering with them."

There's also the potentially devastating hit to Apple's good name. Probably the most frustrating failure of iPhone, of any smartphone, with the possible exception of battery life, is a failed connection. Dropped WiFi and spotty cell coverage can be rage-inducing. We rightly blame such failures on the carrier. With an Apple MVNO, Apple itself becomes the full target for our rage. Given that Apple's brand is the most valued in the world, running its own MVNO seems foolhardy.

Proceed with caution

Apple is famous for seeking to own all the core pieces that directly contribute to the customer experience. Cellular service is a core aspect of that, no doubt. Meaning, no matter what an anonymous Apple spokesperson says, this rumor is unlikely to die. This is doubly so now that Google has stated it will be launching an MVNO-like service. Apple should let the urge pass. Becoming an MVNO is simply too much risk for too little reward. Apple's time can be better spent on fixing the problems it already has, not on adding new ones into the mix.

An Apple wireless carrier? Not a wise move.
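For illustration, here is a rough sketch of the carrier-bidding idea described in the Fadell patent excerpt above. This is hypothetical Python with invented carrier names and rates, not anything Apple has built or announced: each carrier bids a rate for service at the user's current location, and the device simply picks the cheapest bid that clears a minimum coverage bar.

```python
# Hypothetical sketch of the carrier-bidding idea: carriers bid a rate for the
# user's current location, and the device picks the cheapest acceptable bid.
# Carrier names, prices, and signal scores are invented for illustration.
from dataclasses import dataclass

@dataclass
class Bid:
    carrier: str           # e.g. "Sprint", "T-Mobile"
    price_per_gb: float    # rate offered for data at this location, in USD
    signal_quality: float  # 0.0 (no coverage) to 1.0 (excellent)

def choose_carrier(bids, min_quality=0.5):
    """Return the cheapest bid whose coverage meets a minimum quality bar."""
    usable = [b for b in bids if b.signal_quality >= min_quality]
    if not usable:
        raise ValueError("no carrier meets the coverage threshold here")
    return min(usable, key=lambda b: b.price_per_gb)

# Example: in Los Angeles, Sprint might have spare capacity and bid lowest;
# in Silicon Valley, T-Mobile might win instead.
la_bids = [Bid("Sprint", 4.00, 0.9), Bid("T-Mobile", 6.50, 0.8), Bid("AT&T", 7.00, 0.95)]
print(choose_carrier(la_bids).carrier)  # -> Sprint
```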

posted 22 days ago on gigaom
After today, critics of news sharing site Reddit will have a more difficult time claiming the company is invested in the business of hate speech, at least when it comes to a handful of large pro-racist communities on the site. Not only did Reddit release its long-anticipated content policy that outlines what it considers prohibited content and unacceptable behavior, but it also banned a slew of subreddits (aka communities) devoted to pro-racist ideas and discussion. Among the subreddits banned are /r/CoonTown, /r/bestofcoontown, /r/koontown, /r/CoonTownMods, /r/CoonTownMeta, and a handful of others like them.

"Today… we are banning a handful of communities that exist solely to annoy other redditors, prevent us from improving Reddit, and generally make Reddit worse for everyone else," wrote new Reddit CEO and cofounder Steve Huffman. "Our most important policy over the last ten years has been to allow just about anything so long as it does not prevent others from enjoying Reddit for what it is: the best place online to have truly authentic conversations. I believe these policies strike the right balance."

While some users demanded a full list of banned communities, Huffman attempted to explain the process for making sure these toxic subreddits stay dead. "When something gets banned the mods often attempt to recreate the same communities, which we try and stay on top of, so it's an ongoing process today," Huffman commented. That decision does make sense, and obviously killing any racist copycat subreddits before they pick up enough steam from former subscribers would also be logical. However, that doesn't explain why Reddit wouldn't also ban moderators responsible for creating such communities after the fact.

The move to ban subreddits like /r/CoonTown likely had something to do with one Reddit user earlier this week pointing out that Reddit had become one of the largest communities for white supremacy groups on the web, and later asking Huffman directly how he felt about it. "Horrible, actually, but I don't think you can win an argument by simply silencing the opposition," he wrote. Contrary to that response, it seems that Huffman no longer considers the discussion and submission of racist content to exist on Reddit now that these subreddits have been banned. (Although you could always argue that users are still able to continue sharing their views within other subreddits.)

One strong justification for allowing racist or hateful communities to exist on Reddit would be that the sum of all communities does theoretically expose those with racist beliefs to opposing opinions that may challenge their views. Scrubbing those communities from Reddit certainly wouldn't eliminate a dialog of hate speech for people who are unapologetically racist either, as it would still exist elsewhere. For now, it seems that Reddit is perfectly fine with that, although the Reddit community as a whole is still pretty divided.

"Apparently only certain types of bigotry and brigading aren't tolerated here. I wouldn't have much problem with seeing /r/coontown go if your hate speech policy were actually fairly enacted, but this picking and choosing is the reason why many people were opposed to the hate speech policy to begin with," wrote Reddit user Number357. "The problem with banning hate speech is that not everybody agrees on what hate speech is…"

Others immediately started listing off equally racist communities that — according to Reddit's new content policy — now violate the "spirit of Reddit."

As part of the new content policy rollout, Reddit has also begun to implement its new procedure for treating toxic subreddits differently by placing them into quarantine. Subreddits placed under quarantine will prompt users to acknowledge that the community doesn't uphold the ideals of Reddit but also doesn't explicitly violate the rules enough to merit getting banned altogether. That said, I'm almost positive Reddit users will stay vocal about this distinction for the foreseeable future.

Reddit finally begins shutting down racist communities

posted 29 days ago on gigaom
Reddit will soon start treating some of its more controversial communities differently than others, according to newly minted Reddit CEO and co-founder Steve Huffman. Huffman took to the site today to share some updates that will be rolling out over the next few weeks, including a plan to "quarantine" subreddits (aka communities operating within Reddit) that do not comply with the company's new content policy.

These changes are somewhat necessary, as major advertisers likely won't be interested in doing business with Reddit until it puts some distance between itself and those toxic communities that participate in illegal behavior or harass strangers due to their appearance, race, sexuality, etc. It's also deplorable to allow such harassing activity to continue if Reddit is to remain a healthy forum for discussion.

The move is the latest to support the company's new mission to limit harassment on the site. Last month Reddit took heat from some users after it banned the subreddits "Fat People Shame," "Tales of Fat Hate," and many other copycats, which essentially shamed fat people in a public forum. Those are hardly the only toxic communities Reddit harbors, but it seems like the company is finally coming to terms with how to deal with them moving forward.

"You'll need to explicitly opt-in [to quarantined subreddits]. There will be a handful of restrictions, but it's still in flux, so we'll share when it's nearly complete," Huffman wrote, adding that this won't be a black and white process when determining which communities get placed under a quarantine. "We'll need to handle on a case-by-case basis. The purpose of this technique is to give us a way to contain and distance ourselves from communities that we would rather not exist but aren't overtly violating any of our stated rules."

Reddit said it also plans to limit user harassment from private messages by adding an option for Reddit users to report offensive or harassing behavior to the site's administrators. Considering that Reddit has an average of 3.7 million logged-in users per day, this seems like a rather cumbersome task to pull off, but it's certainly a step in the right direction.

But that isn't the only thing Reddit wants to do. It's also revising how it handles banning users, which was previously done discreetly — allowing a banned user to continue using the site but hiding most of their activity (called a shadow ban). In the near future, Huffman said that process will change: "A straight-up, 'you are banned because of X' is the first thing we need."

I've reached out to Reddit with a few followup questions about how the quarantine process will work, and will update this post with anything new.

Reddit plans to 'quarantine' toxic communities, boost transparency

posted about 1 month ago on gigaom
Few people in the tech world can truly be said to "need no introduction." Stephen Wolfram is certainly one of them. But while he may not need one, the breadth and magnitude of his accomplishments over the past four decades invite a brief review:

Stephen Wolfram is a distinguished scientist, technologist and entrepreneur. He has devoted his career to the development and application of computational thinking. His Mathematica software system, launched in 1988, has been central to technical research and education for more than a generation. His work on basic science—summarized in his bestselling book A New Kind of Science—has defined a major new intellectual direction, with applications across the sciences, technology, and the arts. In 2009 Wolfram built on his earlier work to launch Wolfram|Alpha to make as much of the world's knowledge as possible computable—and accessible on the web and in intelligent assistants like Apple's Siri. In 2014, as a culmination of more than 30 years of work, Wolfram began to roll out the Wolfram Language, which dramatically raises the level of automation and built-in knowledge available in a programming language, and makes possible a new generation of readily deployed computational applications. Stephen Wolfram has been the CEO of Wolfram Research since its founding in 1987. He was educated at Eton, Oxford, and Caltech, receiving his PhD in theoretical physics at the age of 20.

Publisher's Note: The following interview was conducted on June 27, 2015. Although it is lengthy, weighing in at over 10,000 words, it is published here in its entirety with only very minor edits for clarity.

Byron Reese: So when do you first remember hearing the term "artificial intelligence"?

Stephen Wolfram: That is a good question. I don't have any idea. When I was a kid, in the 1960s in England, I think there was a prevailing assumption that it wouldn't be long before there were automatic brains of some kind, and I certainly had books about the future at that time, and I'm sure that they contained things about them, how there would be some electronic brains, and so on. Whether they used the term "artificial intelligence," I'm not quite sure. Good question. I don't know.

Would you agree that AI, up there with space travel, has kind of always been the thing of tomorrow and hasn't advanced at the rate we thought it would?

Oh, yes. But there's a very definite history. People assumed, when computers were first coming around, that pretty soon, we'd automate what brains do just like we've automated what arms and legs do, and so on. Nobody had any real intuition for how hard that might be. It turned out, for reasons that people simply didn't understand in the '40s, and '50s, and '60s, that lots of aspects of it were quite hard, and also, the specific problem of reproducing what human brains choose to do may not be the right problem. Just like if you want to build a transportation system, having it based on legs is not the best engineering solution. There was an assumption that we can automate brains just like you can automate mechanical kinds of things, and it's only a matter of time, and in the early '60s, it seemed like it would be a short time, but that turned out not to be true, at least for some things.

What is the state of the technology? Have we built something as smart as a bird, for instance?

Well, what does it mean to make something that is as smart as X? In the history of artificial intelligence, there's been a continuing set of tests that people have come up with.
If you can do X, then we'll know you're as smart as humans, or something like that. Almost every X that's been defined so far, machines have ended up being able to do, though the methods that they use to do it are usually utterly different from the ones that seem to be involved with humans. So the types of things that machines find easy are very different from those kinds of things that people find easy. I think it's also the case that a lot of things people say, "Gosh, we should automate this," the mode of automation ends up being different from just sort of the way that you would—sort of if you had a brain in a box, the way that you would use that. Probably a core question about AI is, "How do you get all of intelligence?" For that to be a meaningful question, one has to define what one means by "intelligence." This, I think, gets us into some bigger kinds of questions.

Let's dive into those questions. But first, one last "groundwork" question: Do you think we're at a point with AI where we know what to do, and it's just that we're waiting on the hardware again? Or do we have plenty of hardware, and are we still kind of just figuring out how to do it?

Well, it depends what "it" is. Let's talk a little bit more systematically about this notion of artificial intelligence, and what we have, what we could have, and so on. I suppose artificial intelligence is kind of a—it's just words, but what do we think those words mean? It's about automating the intellectual activities that humans do. The story of technology has been a long one of automating things that humans do; technology tends to be about picking a task where we understand what the objective is because humans are already doing it, and then we make it possible to do that in an automatic way using technology. So there's a whole class of tasks that seem to be associated with what brains and intelligence and so on deal with, which we can also think of automating in that way.

Now, if we say, "Well, what would it take? How would I know if this box that's sitting on my desk was intelligent?" I think this is a slightly poorly defined question because we don't really have an abstract definition of intelligence, because we actually only have one example of intelligence that we definitively think of as such, which is humans and human intelligence. It's an analogous situation to defining life, for example. Where we have only one example of that, which is life on Earth, and all the life on Earth is connected in a very historical way—it all has the same RNA and cell membranes, and who knows what else—and if we ask ourselves this sort of abstract question, "How would we recognize abstract life that doesn't happen to share the same history as all the particular kinds of life on Earth?" That's a hard question.

I remember, when I was a kid, the first spacecraft landed on Mars, and they were kind of like, "How do we tell if there's life here?" And they would do things like scoop the soil up, and feed it sugar, and see whether it produced carbon dioxide, which is something that is unquestionably much more specific than asking the general question, "Is there life there?" And I think what one realizes in the end is that these abstract definitions of life—it self-reproduces, it does weird thermodynamic things—none of them really define a convincing boundary around this concept of life, and I think the same is true of intelligence.
There isn't really a bright-line boundary around things which are the general category of intelligence, as opposed to specific human-like intelligence. And I guess, in my own science adventures, I gradually came to understand that, in a sense, sort of, it's all just computation. That you can have a brain that we identify, okay, that's an example of intelligence. You have a system that we don't think of as being intelligent as such; it just does complicated computation. One of the questions is, "Is there a way to distinguish just doing complicated computation from being genuinely intelligent?" It's kind of the old saying, "The weather has a mind of its own." That's sort of a question of, "Is that just pure, primitive animism, or is there, in fact, at some level some science to that?" Because the computations that are going on in the fluid dynamics of the weather are really not that different from the kinds of computations that are going on in brains. And I think one of the big conclusions that came out of lots of basic science that I did is that, really, there isn't a distinction between the intelligent and the merely computational, so to speak.

In fact, that observation is what got me launched on doing practical things like building Wolfram|Alpha, because I had thought for decades, "Wouldn't it be great to have some general system that would take knowledge, make it computational, make it so that if there was a question that could in principle be answered on the basis of knowledge that our civilization has accumulated, we could, in practice, do it automatically." But I kind of thought the only way to get to that end result would be to build a sort of brain-like thing and have it work kind of the same—I didn't know how—as human brains work. And what I realized from the science that I did was that it just doesn't make sense. That's sort of a fool's errand to try to do, because actually, it's all just computation in the end, and you don't have to go through this sort of intermediate route of building a human-like, brain-like thing in order to achieve computational knowledge, so to speak.

Then the thing that I found interesting is there are tasks that… So, if we look at the history of AI, there were all these places where people said, "Well, when computers can do calculus, we'll know they're intelligent, or when computers can do some kind of planning task, we'll know they're intelligent." This, that, and the other. There's a series of these kinds of tests for intelligence. And as we all know, in practice, the whole sequence of these things has been passed by computers, but typically, the computers solve those problems in ways that are really different from brains.

One way I like to think about it is when Wolfram|Alpha is trying to solve a physics problem, for example. You might say, "Well, maybe it can solve it in a brain-like way, just like people did in the Middle Ages, where it was a natural philosophy, where you would reason about how things should work in the world, and what would happen if you pushed this lever and did that, and [see] things had a propensity to do this and that." And it would be all a matter of human-like reasoning. But in fact, the way we would solve a problem like that is to just turn it into something that uses the last 300 years of science development, turn it into a bunch of mathematical equations, and then just industrially solve those equations and get the answer, kind of doing an end run around all of that human-like, thinking-like, intelligence-like stuff.
But still, one of the things that's happened recently is there are these tasks that have been kind of holdouts, things where they're really easy for humans, but they've seemed to be really hard for computers. A typical example of that is visual object recognition. Is this thing an elephant or a bus? That's been a type of question that's been hard for computers to answer. The thing that's interesting about that is, we can now do that. We have this website, imageidentify.com, that does a quite respectable, not-obviously-horribly-below-human job of saying, "What is this picture of?" And what to me is interesting, and an interesting episode in the history of science, is the methods that it's using are fundamentally 50 years old.

Back in the early 1940s, people were talking about, "Oh, brains are kind of electrical, and they've got [things] like wires, and they've got like computer-like things," and McCulloch and Pitts came up with the whole neural network idea, and there was kind of the notion that the brain is an electrical machine, and we should be able to train it by showing it examples of things, and so on. I worked on this stuff around 1980, and I played around with all kinds of neural networks and tried to see what kinds of behaviors they could produce and tried to see how you would have neural networks be sort of trained, or create attractors that would be appropriate for recognizing different kinds of things. And really, I couldn't get them to do anything terribly interesting. There was a fair amount of interest around that time in neural networks, but basically, the field—well, it had a few successes, like optical character recognition stuff, where you're distinguishing 26 characters, and so on. It had a few successes there, but it didn't succeed in doing some of the more impressive human-like kinds of things, until very recently.

Recently, computers, and GPUs, and all that kind of thing became fast enough that, really—there are a bunch of engineering tricks that have been invented, and they're very clever, and very nice, and very impressive, but fundamentally, the approach is 50 years old, of being able to just take one of these neural network–like systems, and just show it a whole bunch of examples and have it gradually learn distinctions between examples, and get to the point where it can, for example, recognize different kinds of objects and images.

And by the way, when you say "neural networks," you say, "Well, isn't that an example of why biology has been wonderful, and we're merely following on the coattails of biology?" Well, biology certainly gave us a big clue, but the fact is that the actual things we use in practice aren't particularly neural-like. They're basically just compositions of functions. You can think of them as just compositions of functions that have certain properties, and the one thing that they do have is an ability to incrementally adjust, that allows one to do some kind of incremental learning process. The fact that they get called neural networks is because it historically was inspired by how brains work, but there's nothing really neurological about it. It's just some kind of, essentially, composition of simple programs that just happens to have certain features that allow it to be taught by example, so to speak.
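To make that "composition of functions" picture concrete, here is a minimal, purely illustrative sketch in Python (hypothetical code, not from the interview or from Wolfram Research): a tiny two-layer network, literally a composition of simple functions, whose parameters are nudged incrementally from examples until it learns XOR.

```python
# Illustrative only: a neural network as a composition of simple functions
# whose parameters are adjusted incrementally from examples (learning XOR).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The whole model is just sigmoid(tanh(X @ W1 + b1) @ W2 + b2)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: compose the functions.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: squared-error loss, chain rule gives each adjustment.
    dp = (p - y) * p * (1 - p)
    dW2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    # "Incremental adjustment": nudge every parameter a little.
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

print(np.round(p, 2))  # approaches [[0], [1], [1], [0]]
```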
Anyway, this has been a recent thing that for me is one of the last major things where it's looked like, "Oh, gosh! The brain has some magic thing that computers don't have." We can go through all kinds of different things about creativity, about language, about this and that and the other, and I think we can put a checkmark against essentially all of them at this point as, yes, that component is automatable.

Now, I think it's an interesting thing that I've been slowly realizing recently. It's kind of a hierarchy of different kinds of what one might call "intelligent activity." The zero-th level of the hierarchy, if we take the human example, is reflexive-type stuff, stuff that every human is physiologically wired to do, and it's just part of the hardware, so to speak. The first level is stuff where we have a plain brain, so to speak, and upon being actually exposed to the world, that plain brain learns certain kinds of things, like physiologic recognition. But that has to be done separately for every generation of the species. It's not something where the parent can pass to the child the knowledge of how to do physiologic recognition, at least not in the way that it's directly wired into the brain. Then the second level, the level that we as a species have achieved, and doesn't look like any other species has achieved, is being able to use language and so on to pass knowledge down from generation to generation, which allows us to build up this thing that goes beyond pure one-brain intelligence, so to speak, and make something which is a collective, progressively growing achievement, which is that corpus of human knowledge.

And the thing that I've been interested in is that idea that there is language and knowledge, and that we can create it as a long-term artifact, so what's the next step beyond that? What I realized is that I think a bunch of things that I've been interested in for many decades now is—it's slowly coming into focus for me that this is actually really the thing that one should view as the next step in this progression. So we have computer languages, but computer languages tend not to be set up to codify knowledge in the kind of way that our civilization has codified knowledge. They tend to be set up to say, "Okay, you're going to do these operations. Let's start from the very basic primitives of the computer language, and just do what we're going to do." What I've been interested in is building up what I call "knowledge-based language," and this Wolfram Language thing that I've basically been working on for 30 years now is kind of the culmination of that effort. The point of such a language is that one's starting from this whole corpus of knowledge that's been built up by our civilization, and then one's providing something which allows one to systematically build from that.

One of the problems with the existing corpus of knowledge that our civilization has accumulated is that we don't get to do knowledge transplants from brain to brain. The only way we get to communicate knowledge from brain to brain is turn it into something like language, and then reabsorb it in another brain and have that next brain go through and understand it afresh, so to speak. The great thing about computer language is that you can just pick up that piece of language and run it again and build on top of it. Knowledge usually is not immediately runnable in brains. The next brain down the line, so to speak, or of the next generation or something, has to independently absorb the knowledge before it can make use of it.
And so I think one of the things that's pretty interesting is that we are to the point where when we build up knowledge in our civilization, if it's encoded in this kind of computable form, this sort of standardized encoding of knowledge, we can just take it and expect to run it, and expect to build on it, without having to go through this rather biological process of reabsorbing the knowledge in the next generation and so on. I've been slowly trying to understand the consequences of that. It's a little bit beyond what people usually think of as just AI, because AI is about replicating what individual human brains do rather than this thing that is more like replicating, in some more automated way, the knowledge of our civilization. So in a sense, AI is about reproducing level one, which is what individual brains can learn and do, rather than reproducing and automating level two, which is what the whole civilization knows about.

Go to page 2 (of 3) on Gigaom.

Interview with Stephen Wolfram on AI and the future

posted about 1 month ago on gigaom
When Knowingly founder Byron Reese purchased Gigaom after it shut down in March, he did so with the intention of restoring the tech news site to its prior glory. Doing that entails building a team of editorial staffers to cover startups and technology business news in a fair, honest, and insightful manner on a consistent basis. The new Gigaom will get there, but it won't happen overnight.

Gigaom is a tech publication that was well known for its insightful staff and stories. That was the publication's biggest strength, and to get back there again will take some time. To clarify, this publication is still very much interested in delivering the news — that much isn't changing. But for now, we're not focused on driving page views, social shares, or growing other traffic metrics. Just good journalism and insightful commentary.

But let me back up a second to give a proper introduction. For the last half decade, I've covered tech news as a reporter and spent much time with the startup community. I'd always been a huge fan of Gigaom, and I was shocked when the publication went dark. I, along with many others, did not want to see an archive of 200,000 Gigaom articles published during the last 10 years by some of the best writers in the business disappear.

As of Monday, I'm starting as an editorial adviser to Gigaom, where I'll be helping the Knowingly crew plot a strategy that will put Gigaom back on the map. When I'm not advising, I'll be writing reported pieces based on the hottest topics in the current news cycle, because tech reporting is my first love and likely something I'd be doing on my own time anyway. I'll also be joined by some freelance writers and a few regular columnists. On top of that, our owner Byron, a published author and founder of many businesses, will also be contributing with interviews of notable folks in the world of technology.

While the site will finally start publishing articles again, I'd like to emphasize that this isn't a full relaunch. To do that, again, requires a full staff. What's happening now is that we're beginning to build towards that. We look forward to your feedback and to eventually bringing you a fully revamped Gigaom experience.

Turning the lights back on

posted 6 months ago on gigaom
*** UPDATE AS OF MAY 26 *** The information below is no longer current. See this post for a current update.
A brief note on our company: Gigaom recently became unable to pay its creditors in full at this time. As a result, the company is […]
About Gigaom

posted 6 months ago on gigaom
Unfortunate timing means I'll miss a huge chunk of this year's South by Southwest Interactive festival happening in my hometown of Austin, Texas. I'm both saddened and a bit relieved, since it is my 14th year attending the festival, and it will be a nice […]
Fear, food and the internet of things at SXSW

posted 6 months ago on gigaom
After attending the "Spring Forward" event to get all of the remaining Apple Watch details and then getting some hands-on time with the device, I walked away with mixed emotions. On the one hand, Apple debuted a polished product that was very responsive in my use. On […]
Apple's take on the smartwatch: Elegant evolution

posted 6 months ago on gigaom
Apple only gave a fleeting demo of how contactless payments would work on its new Apple Watch at its Spring Forward event on Monday, but it was an impressive one. You select a card from Passbook in the watch interface and then tap the wearable device […]
Apple Pay on the wrist: How Apple's watch gets around the ID problem

posted 6 months ago on gigaom
HBO Now is almost here: HBO officially announced plans to launch its online-only streaming service dubbed HBO Now during Apple's spring event Monday, and promptly managed to confuse everyone with an exclusive that isn't quite exclusive and a price that's not set in stone. Time to […]
All you need to know about HBO's new HBO Now streaming service

posted 6 months ago on gigaom
It looks like Twitter bought livestreaming app Periscope weeks ago, before competitor Meerkat went viral. Two outlets, Business Insider and Re/code, are reporting the news from their respective anonymous sources, but there's no word on how much the deal closed for or confirmation from Twitter (I'll […]
Twitter has reportedly acquired livestreaming app Periscope

posted 6 months ago on gigaom
We won't get to test battery life for the Apple Watch until it lands on wrists starting on April 24. CEO Tim Cook didn't go into much battery detail at the Apple Watch keynote, merely promising "all-day battery life," which apparently means 18 hours, according to Cook. But […]
Apple Watch will take 150 minutes to charge fully

posted 6 months ago on gigaom
Even as belt-tightening has led the New York Times to close sections and shed reporters, the Gray Lady is spending large sums on legal bills to fight a patent troll that claims to own the rights to sending internet links via text message. The Times has been fighting the case […]
New York Times' 5-year fight with patent troll may cost millions

posted 6 months ago on gigaom
Saudi Arabian billionaire tech investor Prince Alwaleed got the rumor mill going when he put out a statement saying he met with the CEO of Snapchat and the two discussed co-operating on a potential business deal.
Snapchat CEO meets with Saudi investor Prince Alwaleed bin Talal

posted 6 months ago on gigaom
Apple answered questions about its first smartwatch at a media event in San Francisco on Monday and now we've got some answers on how much Apple Watch will cost and when you can buy one. Apple Watch pricing depends on which size you get. The 38-mm Apple Watch Sport […]
Apple Watch ranges in price from $349 to over $10,000; on sale April 24

posted 6 months ago on gigaom
Apple Pay is accepted at 700,000 retail locations in the U.S., and the iPhone-embedded payment service now loads cards from 2,500 card-issuing banks, CEO Tim Cook revealed at the kickoff of Apple's Spring Forward event on Monday. That's pretty astonishing growth considering Apple was […]
Apple has tripled the number of stores accepting Pay in 5 months

posted 6 months ago on gigaom
We're finally getting to some of the promise of connected health with the launch of ResearchKit, a framework announced at the Apple event Monday that allows medical researchers to take advantage of the data gathered by the iPhone to help advance their own diagnostics or studies […]
Apple launches ResearchKit to bring your data to medical research

posted 6 months ago on gigaom
Although most of the attention at Apple's special event on Monday will be on the Apple Watch, the company still had a little treat for Mac fans. As expected, Apple launched a 12-inch MacBook on Monday, and it will cost $1,299 or more when it starts shipping on […]
Apple debuts a thin, fanless MacBook that comes in gold

posted 6 months ago on gigaom
HBO CEO Richard Plepler took to the stage at the Apple Watch event Monday to officially announce the launch of HBO Now, which will debut in April in time for the fifth season premiere of Game of Thrones. Apple will be the official launch partner of HBO […]
HBO officially announces April launch of HBO Now at Apple event

posted 6 months ago on gigaom
Netflix is looking to launch in Spain later this year, according to local media reports that were relayed by Variety this weekend. According to these reports, Netflix could launch in Spain as early as September. The streaming service has reportedly already negotiated rights to launch in Spain, and TV […]
Netflix may launch in Spain later this summer

posted 6 months ago on gigaom
Tesla is moving incredibly fast getting its massive battery factory built, and plans to have batteries made there as early as 2016 for its Model S and Model X cars. Showing off how much has already been built at the site, Tesla investor Steve Jurvetson of […]
See Tesla's massive battery factory under construction

posted 6 months ago on gigaom
Two pilots aboard a solar-powered aircraft took off at 7:12 a.m. local time from Abu Dhabi for the first leg of what they hope will be the first complete solar-powered circumnavigation flight. If all goes well, Solar Impulse-2 should take about 12 hours to reach Oman (a […]
Solar-powered plane takes off for round-the-world flight

posted 6 months ago on gigaom
Amazon is making efforts to provide highly predictable performance outputs and to match its C4 family's price-performance with that of its earlier generation C3 family.
Comparing Amazon EC2's C3 and C4 Families

posted 6 months ago on gigaom
MIT wants prospective students to know that it's fully aboard the drone bandwagon, enlisting an army of delivery drones (and computer-generated imagery) for a tongue-in-cheek admissions video, complete with Wagnerian orchestration. MIT sends acceptance letters to its next class of freshmen on Pi Day, aka March 14, or […]
MIT wishes it could deliver acceptance letters via drone
