Author: gordon

  • TED: Future of Mobile With Henry Tirri, Head of Nokia Research [INTERVIEW]

    Disclosure: Nokia is a sponsor of Mashable’s TED Channel

    We had a chance to sit down at TED with Henry Tirri, Senior Vice President and Head of the Nokia Research Center, to talk about what the mobile landscape of the future holds. Read on to find out what we might expect from mobile technologies within the next five to ten years.

    Q: Can you tell us a bit about what you do at Nokia?

    A: I’m heading Nokia’s long-term research globally in our labs worldwide, from Santa Monica and Palo Alto to the easternmost lab in Beijing, and everything in between: Cambridge, UK, Los Angeles, Switzerland, and teams in Nairobi and Bangalore and so on.

    Q: What emerging technologies do you see playing the biggest role in the next five to ten years: augmented reality, voice recognition, etc.?

    A: Those two things are more user experience technologies, but you’re correct. We also talk about “mixed reality” — the terminology can be confusing, but there is a distinction between augmented reality, where I’m looking at reality and adding information to it from the digital world, and mixed reality, where you can also do the reverse and put things from the real world into the virtual world. To me it’s obvious that it’s such a natural way of looking at the world and interacting with it.

    The key question is how simple and how immersive it becomes. My prediction is that it starts with rather isolated services like search and navigation, but at the end of the day it becomes part of the interaction. You no longer find it extraordinary that you can see the real picture and get some digital information too, or vice versa. And it might be visual digital information, or audio, or even sometimes sensing. If you’re talking about a five or ten year spectrum, we’re probably going to have some kind of haptic and sensing way of navigating and getting feedback.

    All of this is a very Western view: The high end, cool things for those living in the “geek world.” But if you ask me then about growth economies and the emerging markets like Africa, India, greater China, Latin America and some parts of Russia, I would say that the experience and emerging technologies tend to have a different nature because of the constraints you have. You might not have the infrastructure to support data, for example.

    So from an interface perspective, speech and gestures are very important there. But emerging technologies are not necessarily always related to the user experience, so things like energy-efficient networking are also a necessity in growth economies. Protocols like SMS are being used in these areas for things we wouldn’t dream of doing with them here, because we have access to broadband. There are the “hundreds of millions” who are doing all these very sophisticated and cutting-edge things, and at the same time there is emerging technology for the “billions” which can take a different track.

    Q: Do you think there will be an upcoming involvement with biology? Are we going to bring these devices into our bodies? Will I have a phone in my wrist?

    A: Yeah, chip embedding is already an old idea in computer science, so we’re ready for that. I think there’s a natural continuum from biosensors — we already have heartbeat sensors connected to a wireless device, measuring you for sports and wellness purposes. So again, if you talk about the five to ten year timeframe, the questions there are more related to the sensors. In some areas, sensor development is slower than one would think. Mechanical sensors are progressing faster, but chemical sensors are much slower, so even in the five to ten year domain, certain things are not so easy to do.

    When you talk about implantable electronics, you start having … challenges with your biological rejection mechanisms and other problems for medicine to solve. I would say in five years it doesn’t become big, but in ten years I would be surprised if we’re not seeing a lot more of it. Five years is surprisingly fast, because when you think about large scale deployment of something, there’s a delay factor involved in getting the manufacturing process to be reliable and cheap enough.

    I do believe health and wellness-related things will become part of our lives, and they will probably merge with augmented reality too. Your body state will be communicated somewhere, or you can start getting metadata and remote analysis on yourself.

    Q: How important do you see cloud computing being for mobile, now that we have an increasing range of devices we cart around with us and are looking for a more seamless experience between them?

    A: To me, the cloud has become, and will continue to become, a much broader notion than a server farm sitting somewhere doing something. The cloud architecture will expand to more devices, and the question is more one of seamlessness in actual usage. You may not even know, on occasion, what is computed physically close to you and what is computed far away.

    There are two issues: One is energy. Sending information bits takes more energy than computing them, which means local computing consumes less energy. This is absolutely so fundamental that it will define the future of how our networks will be built. It implies that the cloud has to have a distributed architecture, because it will be too costly energy-wise for billions of people to be transmitting data. I’m not talking about the bandwidth problem — this is much more fundamental. Regardless of how much bandwidth you have in the dynamic user spectrum, you will still face this problem.

    The second problem is sociological: privacy. People are much more positive about something physically close to them and physically in their possession, because they feel they have more control over it. You believe that if your personal metadata sits on the device, it’s better than letting it go off to some nameless server. So there will still be parts of metadata and bits of information sitting close to you for these sociological reasons.

    But the cloud itself will expand, and I think the term will eventually disappear. It will just be our default network architecture.

    Q: Do you think people’s notions of privacy might change over time too? I’m thinking of Facebook pushing on people’s privacy, Google taking Gmail more public with Buzz…

    A: Yes, and my views on this have evolved a lot over the past 20 years. One dimension is that privacy is culturally dependent, so privacy in growth economies looks a bit different from privacy in the Western world. And even in the Western world, there are different approaches to privacy in Europe and the U.S. In Europe for example it’s very much regulatory — Germans don’t like Google Street View so they banned it. In the EU there’s a lot of regulatory resistance. In the U.S. it’s more like a community movement, “we’re going to make it public that you’re evil.” So it’s a different approach. Asia is somewhere in between.

    There are also very contradictory arguments that have been presented to me on whether there’s a generation gap or not. Some say young people put more things up on Facebook or publish things people in my generation would never publish. I’m not totally sure if the generation gap is the right thing to ask. I think it’s more of a question of how much the technology is a part of your life, and it doesn’t as much matter what your age is, although there might be a correlation between the two.

    I think it’s complex to predict how people will react, and whether there will be negative consequences. Privacy is always considered with respect to the tradeoff you get in terms of utility. If one or two people didn’t get a job, or got fired, because of something embarrassing they posted on Facebook, but 100,000 people were recruited because of their Facebook presence, how does the judgment come down regarding privacy? Privacy is always relative to the benefits you get, so if people see enough value in sharing and feel safe enough, privacy isn’t the same question anymore. There’s no simple answer — privacy is an evolving factor.

    Q: What do you think of the renaissance of the tablet form factor, and will we see another range of devices occupying this middle ground between smartphone and laptop?

    A: I’m a computer scientist and have been hacking with computers for 40 years, so I’ve seen the development from mainframes to minicomputers to PCs to laptops to PDAs. The sarcastic comment is that all of them are “fads” to some degree; they come and go and the form factor changes. But each can stay popular for a decade, two decades, or more. On the other hand, the only thing that has really disappeared is the minicomputer. Mainframes still exist, PCs still exist, and so on.

    I don’t think the tablet will “kill” anything — I don’t think it’s strong enough. I would almost think that tablets and netbooks might see convergence. I don’t think the tablet will become so dominant that you will drop your laptop or netbook and use it as your only device.

    Q: How will the advent of 4G change the computing landscape? Will we see new types of applications become possible?

    A: This is the capacity question, and right now data-intensive applications cause bandwidth challenges. The interesting thing is we have tolerance thresholds for new features: we want to keep doing things as long as they’re fast enough, but if the performance is below that threshold, we’ll just tinker with it for a bit. And I think real-time online media streaming will become more prevalent.

    Right now the latency is not good enough. You can’t have 20 million people streaming their personal video streams around the world in real time — that is not possible yet, but it will become so. There will definitely be new applications emerging — it won’t just be the old ones getting faster.

    Q: In terms of online media streaming, do you think that’s going to change things on the content provider end of things? There’s a user behavior issue to confront too, and I think about how hard things like mobile TV have struggled to take off. How many people really need to watch TV while they’re walking to their car?

    A: That’s again extremely culturally dependent, looking at places like Korea that have had mobile TV for years. But for me, real-time media streaming is more about the popularity of sharing your own personal experiences, like your kids playing soccer or when you’re out with your buddies at the bar. That’s a different thing from traditional content; for one thing it’s snippets, so it tends to be shorter, but it’s also participatory, and it’s human nature to want to exhibit yourself. It becomes a form of expressing yourself, and that will always be popular. And there’s always a long tail of people who are interested in you expressing yourself.

    I think the most difficult thing is scale. Something like Twitter is interesting when you have a few followers, and it’s great when you have 2 million followers, but if you have something like 10,000 followers it’s more like, “what do I do?” They are not my buddies anymore — I don’t know 10,000 people, and on the other hand I’m not famous like someone who has a million followers. I believe in this idea of a federated local community: it’s good when you have a small audience, and federated means you have a common platform and you can actually reach things globally. There’s a certain community that is local enough in a network sense — not necessarily a geographic sense — to want to follow you.

    Q: That makes a lot of sense, especially considering the landscape of user-generated content on the web — that’s a lot of what people want to share.

    A: Yeah, they just want to share and if there’s an easy way of doing it and there’s a general platform, they will do it. Because there’s always some people who want to follow it.

    Q: How far along are we in terms of bringing mobile and artificial intelligence together?

    A: People talk a lot about intelligent agents, but I think in a computer form factor it doesn’t make that much sense. Think of the annoying Microsoft Office paperclip guy that no one wanted. The devices we’re talking about are much more personal, so if you can get help doing real things and interacting with the world, it becomes more persuasive and appealing to have an intelligent agent or avatar type of thing.

    The greatest intelligent agent behaves like a good secretary, who can predict a lot of the things I do, can handle a lot of tasks and information flow, and only checks with me on the things that are important. People want to do this and there’s a lot of development around it, but it faces the same problem that any AI activity does: any time we introduce an automated way of doing something, our own cognition shifts to a different level of abstraction and starts to assume it.

    When there’s a more intelligent layer in a device or in software, we start using it in a different way. This is very fundamental and has nothing to do with mobile devices specifically. But I truly believe there’s a good place for AI — we already have elementary things like navigation assistants that can provide intelligent traffic information. There’s actually a lot of hidden intelligence already, and machine learning is already used a lot.

    Radio technology will be using AI techniques too, in a deep and unseen way. Dynamic allocation of spectrum based on availability has deep machine learning components — it has to learn to predict when certain spectrum is available, and so on. So there is a lot going on, but it isn’t necessarily always as sexy as the intelligent assistant everybody is looking for.
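
    Tirri doesn’t describe Nokia’s actual approach here, but a toy sketch of the idea he mentions, learning to predict when spectrum is free from past observations, might look like the following. The channel name, the per-hour frequency model, and the fallback probability are illustrative assumptions, not anything from the interview.

    ```python
    from collections import defaultdict

    class SpectrumAvailabilityModel:
        """Toy predictor (illustrative only): estimates how often a channel
        has been observed free at each hour of the day. Real dynamic
        spectrum access uses far richer models and radio measurements."""

        def __init__(self):
            # (channel, hour) -> [times_observed, times_free]
            self.counts = defaultdict(lambda: [0, 0])

        def observe(self, channel, hour, was_free):
            entry = self.counts[(channel, hour)]
            entry[0] += 1
            entry[1] += int(was_free)

        def predict_free_probability(self, channel, hour):
            observed, free = self.counts[(channel, hour)]
            if observed == 0:
                return 0.5  # no data yet: assume a coin flip
            return free / observed

    model = SpectrumAvailabilityModel()
    model.observe("channel-21", hour=14, was_free=True)
    model.observe("channel-21", hour=14, was_free=False)
    print(model.predict_free_probability("channel-21", 14))  # 0.5
    ```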

    Q: As location-based services become more and more popular, do you see any killer apps emerging?

    A: The first things that come to mind are local search: really relevant search results based on your position. Social search is another no-brainer, because you want to find people based on physical proximity; it doesn’t make any sense to go to the bar with someone far away. These are no-brainers and they will be very big.

    The things people don’t usually think about with location-based systems are aggregate things like traffic information, and collective information about air pollution and other environmental data. In growth economies there’s a need for health-related and epidemic information collection. Mobile devices are key to monitoring things like this because they are globally prevalent and always where we are. They will enable us to aggregate data and get information that would otherwise be very difficult to get — I call these aggregate services.

    The pollution example is a very good one. You can start to get real-time information about the environment — your exposure to pollution in LA, for example. We did this with traffic already, so think about generalizing it to weather, pollution, and other measurements. The platform lets us combine people’s positions with something measured, and that gives us a new world.
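
    A rough sketch of what such an aggregate service could do, assuming each phone reports a (latitude, longitude, reading) tuple: bucket the readings into a coarse grid and average them per cell. The cell size and the sample numbers below are made up for illustration.

    ```python
    from collections import defaultdict

    def aggregate_readings(reports, cell_size_deg=0.01):
        """Bucket geotagged sensor readings (e.g. air-quality measurements)
        into a coarse lat/lon grid and average them per cell.
        Purely illustrative of the "aggregate services" idea."""
        cells = defaultdict(list)
        for lat, lon, value in reports:
            key = (round(lat / cell_size_deg), round(lon / cell_size_deg))
            cells[key].append(value)
        return {key: sum(vals) / len(vals) for key, vals in cells.items()}

    # Example: three phones report readings; the first two fall in the same cell.
    reports = [(34.052, -118.243, 61.0), (34.053, -118.244, 58.0), (34.100, -118.300, 40.0)]
    print(aggregate_readings(reports))
    ```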

  • ControlZ, the post-rapture pawn shop

    Armando Morán likes your comment: “I say we capitalize on these ignorami…”

    …but in a nice way so that they learn to think and understand before blindly following…for next time

  • Xerox CEO: Our schools lost focus – Video – Business News

    Sensible, logical, clear-thinking CEOs are panic-stricken that we DO NOT have a pool of candidates available in this country to fill their current openings. Is public school #education reform too little, too late? Does anybody even care enough to do something, or will we just wait for the crisis to hit the rich and powerful and hope we can “start over”? When will we wake up to what is really happening in the broken social systems all around us?

  • Disrupt Education | Big Think

    Herrmannebbinghaus

    What is the best strategy to learn and memorize? A look at Amazon turns up a wide variety of books on the topic, and I am pretty sure the strategies described in those books are effective, some more and some less. But the biggest enemy of effective learning can’t be removed by applying those strategies, because it is something fundamental and essentially more important than having a strategy: motivation, or in other words the lack of motivation many learners experience.

    Face it, the vast majority of mankind gets easily distracted and bored, and learning is usually not at the top of the list when most people think of fun activities. The thing is, pioneers like Hermann Ebbinghaus already came up with proven methods for learning more effectively and, even more importantly, for retaining the information learned over a longer period of time. Figuring out how to memorize best is not at all a new thing or a recent phenomenon.

    The problem in those days was that, to be effective, you needed to set up a very organized and rigid learning schedule without a computer; Ebbinghaus published his book “Über das Gedächtnis” (“On Memory”) back in 1885.

    Fast forward to the PC era. You now had software built on spaced repetition methodology, but the PC itself was still not an essential part of our lives; it was basically a better typewriter. That began to change with access to the Internet and really took off with Web 2.0. All of a sudden the PC became the place where we find information, rather than old media like TV, and the place to connect with our friends and family, rather than the telephone. But still, the PC was locked in our homes.
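
    As a concrete illustration of the spaced-repetition idea mentioned above (not the algorithm of any particular product), a minimal scheduler could stretch the review interval after each successful recall and reset it after a lapse. The starting interval and growth factor below are assumptions for the sketch.

    ```python
    import datetime

    def next_review(interval_days, recalled, growth=2.5, first_interval=1):
        """Minimal spaced-repetition scheduler (illustrative only).
        Successful recall stretches the interval; a lapse resets it."""
        if not recalled or interval_days == 0:
            return first_interval
        return round(interval_days * growth)

    # Example: a card recalled correctly three times, then forgotten once.
    interval = 0
    for recalled in [True, True, True, False]:
        interval = next_review(interval, recalled)
        due = datetime.date.today() + datetime.timedelta(days=interval)
        print(f"next review in {interval} day(s), on {due}")
    ```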

    With the rise of smartphones, relatively cheap data plans, the cloud, and applications, we can now do what we want or need to do anywhere, at any given time, and that includes learning.

    Mobile technology and applications based on research in memory and cognitive science have the power to enable a large group of people to learn effectively for the first time in history.

    Up to now, learning in school or college has been based on tradition, lesson plans, and undoubtedly some well-respected methods, but without real scientific evidence that the way we learn is actually the best way we could learn. It’s just the way some people decided on, and we have done it that way ever since.

    Implementing scientific methods in a standard curriculum is of course a hard thing to do, and humans will always try to work their way around strict rules and regulations. New things often appear a little scary to more than a handful of people, because we tend to feel comfortable with what we have accepted as the standard way of doing things. That is true for learners as well as teachers.

    Applications cannot be changed, and in most cases we don’t even know how the method that forms the basis of an application actually works, just as most of us don’t know how a computer actually works. The important thing for the user is that it works; in the case of learning applications, the important thing is that I learn and memorize information.

    If those applications are then sugar-coated with edutainment and gamification features that trigger our inner drive to win and compete with each other, effective learning is just a mouse click or a tap of a finger away.

    Here are some mobile and web applications that are based on scientific research.