Big Data

Live Blogging from the Cognitive Colloquium

05:00 PM
Guruduth Banavar, Vice President of Cognitive Computing, IBM Research

Wrapping up, there are three things I want to highlight….

First, concerning neuroscience:

The analogy we draw from neuroscience and the human brain, the inspiration we get from the brain, has driven a lot of thinking. This is an area that I want to explore more.

We talked about how cognitive systems need to be aware of themselves at some level to be effective in collaboration. They have to know their limits—what they can and can’t do.

The second point concerns cognitive technologies:

The idea of knowledge engineering came across loud and clear to me. It came across in every session. We need to train people to understand data, domain knowledge and the language of domains. These are huge areas that will redefine jobs and professions.

Third point, about the combination of cognitive and immersive systems:

The concept that stood out for me was the importance of human scale. EMPAC can give us an opportunity to do experiments to explore this topic and find out what will work in a practical setting.

—–
03:45 PM – 04:30 PM
Panel Discussion: Ronny Fehling, Vice President and Head of Data Driven Technologies, Airbus Group
Joel Parker, School of Medicine, UNC-Chapel Hill, Cancer Genetics
James Hendler, Director, Institute for Data Exploration and Applications, RPI
Aaron E. Walsh, Director of the Immersive Education Initiative
Session Chair: Jonathan S. Dordick, Howard P. Isermann Professor of Chemical and Biological Engineering, RPI

Dordick: We don’t know enough about biology. How do you fill in the gaps in the data using cognitive and immersive technologies?

Hendler: We’re trying to figure out how to bring data together from different groups and professions. They often have different definitions for things. We’re using machine learning. We’re seeing what people who survived cancer have in common. We’re using the data to figure out where the science may lie, and turning it over to the scientists to ask how we can solve some of these problems.

Dordick: Ronny, there are as many parts in a human cell as in an aircraft. What technologies are you using in aircraft design?

Fehling: We have some amazing models to build and test airplanes and to simulate them. We collect information during flights from sensors. What we try to do on the cognitive level is test out different scenarios we can play out in real time.

In the design phase, we try to visualize the design and have the system see where there are limits to what you can do.

Dordick: Aaron, how do you see cognitive and immersive systems impacting the classroom of the future?

Walsh: Our goal is to use technology to better engage students in learning. We use virtual reality technologies. I meet them in a virtual space. We also use augmented reality, simulations and learning games. What’s missing is artificial intelligence.

When it comes time for the student to ask questions, AI hasn’t been able to do it. But cognitive computing has the potential to create personal learning experiences—like you have a professional guide to learning.

Hendler: With our Mandarin project, we’re adding intelligent agents into it. We’re teaching Mandarin in a new way, immersing the students in learning so they can learn faster. The students are helping us develop the technologies for their course. We’re using narrative and storytelling techniques.

Walsh: You learn better from the learning experience if you help to design it.

We’re building virtual reality history lessons, and having the students construct their own immersion experiences. Not every student can do this, so we need cognitive systems to help.

Dordick: At Rensselaer, our theme this year is resilience. Ronny, how do you address the resilience of your supply chain at Airbus?

Fehling: If we have problems with the supply chain, it can hold a plane at an assembly station for four hours or up to a week. So we have to be able to anticipate problems. We have to monitor and see outliers so we can predict, be alerted, and head off problems. We can use these cognitive and immersive technologies together on this.

Hendler: We’re looking at the resilience initiative. So much of the supply chain work is based on economic factors. What really causes the problems is when two or three things go wrong at once, or when second- or third-tier suppliers have a problem. We look at cascading failures. We track all the factors that affect our supply chain and use that in our model and in our planning. We use cognitive technologies for this. We have cognitive planning technology.
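
Hendler doesn’t spell out the model, but the cascading-failure idea can be sketched in a few lines. The sketch below is a hypothetical illustration, not Airbus’s or RPI’s actual system; the supplier names, dependency graph, and failure probabilities are invented for the example.

```python
import random

# Hypothetical multi-tier supply chain: each node depends on lower-tier suppliers.
dependencies = {
    "wing_assembly": ["supplier_A", "supplier_B"],
    "supplier_A": ["raw_titanium"],
    "supplier_B": ["raw_titanium", "fastener_plant"],
}

# Illustrative probability that each node fails on its own in a given week.
failure_prob = {
    "raw_titanium": 0.02,
    "fastener_plant": 0.05,
    "supplier_A": 0.01,
    "supplier_B": 0.01,
    "wing_assembly": 0.005,
}

def stall_probability(trials=20_000):
    """Estimate how often the top-level assembly stalls, counting cascades
    from second- and third-tier failures, not just direct ones."""
    stalled = 0
    for _ in range(trials):
        failed = {n for n, p in failure_prob.items() if random.random() < p}
        changed = True
        while changed:  # propagate: a node fails if any dependency failed
            changed = False
            for node, deps in dependencies.items():
                if node not in failed and any(d in failed for d in deps):
                    failed.add(node)
                    changed = True
        if "wing_assembly" in failed:
            stalled += 1
    return stalled / trials

print(f"Estimated weekly stall probability: {stall_probability():.3%}")
```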

Walsh: The idea of supply chain and cascading problems can be applied to the learning experience. Most students have gaps in knowledge. When they get to a certain point, they stall. But if a system can identify the gaps, you can fill them before they become problems.

You could develop systems for education that are similar to the ones that help improve supply chains.

Audience: How do you deal with the risks and liabilities of using these technologies?

Fehling: We’re trying to decouple the mission critical systems from the cognitive systems. We’re struggling to define the interface.

Hendler: We have to help the policy makers deal with some of these new technologies. The payoff from using these technologies can be huge, but we have to help the policymakers and the students deal with the legal issues.

——

04:15 PM

Healthcare and Other Applications
Keynote: Joel Parker, School of Medicine, UNC-Chapel Hill, Cancer Genetics

In my field, we have tons of data, but we still have many unanswered questions.

Our big source of data today is the Cancer Genome Atlas, a large collaborative study across the US and Canada to understand the molecular and genomic basis of cancer. It includes data from DNA sequencing and RNA sequencing—from about 10,000 people. Now we’re going through and mining the data for insights.

We’re looking for mutation rates, mutated genes, clinical features and how the tumor is evolving over time.

Using the information, we’re breaking cancers down into subtypes so we can produce more personalized treatments.
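
Parker doesn’t describe the method here, but unsupervised clustering is one common way to derive subtypes from expression data. A minimal sketch, using synthetic numbers in place of real Cancer Genome Atlas measurements, might look like this:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for an expression matrix: rows are tumors, columns are genes.
# A real analysis would load TCGA-style data here instead.
rng = np.random.default_rng(0)
expression = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(50, 200)),  # one synthetic subtype
    rng.normal(loc=2.0, scale=1.0, size=(50, 200)),  # another synthetic subtype
])

# Normalize each gene, then group tumors into a chosen number of subtypes.
scaled = StandardScaler().fit_transform(expression)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)

print("Tumors per subtype:", np.bincount(labels))
```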

It’s clear that genomics can provide valuable clinical information. We can model it to extract the necessary information for decisions.

Cognitive computing can be helpful here.

We and other institutes are now proactively sequencing the genomes of current patients. This way, we can correlate molecular alterations with clinical outcomes.

There’s no way we can handle all this data. Our hope is that Watson can.

——–

3:00 PM
Panel Discussion: Anil Ananthaswamy, Consultant, New Scientist magazine
Dr. Wayne Gray, Professor of Cognitive Science, RPI
Dr. Gary Marcus, Professor of Psychology and Neural Science, New York University
Session Chair: Dr. Guillermo Cecchi, Research Staff Member, Computational Neuroscience, IBM Research

Cecchi: Is a notion of self essential to having a quantum leap in cognitive systems? And what can neuroscience teach cognitive systems?

Marcus: I think self is optional for a lot of things. There’s so much we have to do, even in natural language processing. Our phones don’t yet really know what we’re talking about in a deep way.

I’m more interested in the machine having a representation of me. It can do that without having a representation of itself.

Self is optional, at least for the next decades.

Gray: I think in terms of cognitive systems more than of selves.

Ananthaswamy: The self doesn’t need to be self-conscious. It’s a system that has a boundary and is aware of it. All systems will have that. But does the AI become aware of the self having a self?

In the control system of a robot, it will have a mechanistic understanding of itself—what it does and what it can do.

Already, a self-driving car has an understanding of itself—what it does, the rules it follows.

Ananthaswamy: We humans have value systems. The AI systems will perceive them and may react to them. But you can also program in the value systems so they know how they’re supposed to behave.

Cecchi: Neuroscience is a very large area. A group of people are trying to reconstruct the neural activity in the brain and use it as a model for computing. Others are pursuing other approaches. What direction should we go?

Marcus: There’s a cognitive science approach and a neuroscience approach. What can we learn from them to help us design smart machines? Can we look to humans for clues? The answer is next.

Can we look at the neural connections? Some are doing that. But it’s incredibly difficult.

We’ll need a lot of AI to help us understand the circuits of the brain—so we can use that to design computers.

Other AI research is based on getting a better understanding of human cognitive processes. I think it’s more likely that we can make progress with that.

I like Minsky’s idea of the society of mind. We are a collection of ideas and impulses. We need to be careful about how much we use humans and our brains as models for smart machines.

We don’t want AI systems that are as impulsive as a teenager.

Cecchi: Superintelligent machines don’t have to be copies of the human brain…

Marcus: We don’t have to build replications of humans. I have two children already. I want machines that can do things that we don’t do so well.

Audience: We think of cognitive systems as having a relatively static set of abilities. We can train them to some extent. But how can we build cognitive systems that gain or lose abilities over time—without a theory of self?

Marcus: Robots can recover from injuries and they can get new capabilities through software without having a sense of self. Your phone has a new skill every time you download an app. There’s a system where things are connected and that knows how to deal with new skills.

If your phone starts to wonder who am I and why am I here, then we’ll have to figure out how to deal with this.

Ananthaswamy: A robot might understand that it needs to maintain its self—if its arm falls off. It might ask people to repair it or do the repairs itself.

Audience: We are always spewing information about ourselves into the universe. How do we go about figuring out who it is who is spewing out the data? And how do you apply that to machines?

Marcus: You can imagine neuroscience 50 years hence. Maybe you’ll be able to upload your brain and make a backup copy. Then you can use it later if something goes wrong with your brain. It might help with traumatic brain injury.

—-

02:30 PM
Neuroscience and Cognitive Systems
Keynote: Anil Ananthaswamy, Consultant, New Scientist magazine

I explore the human sense of self. We’re faced with the opportunity to use cognitive computing to augment our experience of being alive. If we can’t answer the question, who am I, we won’t be able to design intelligent systems that can interact more naturally with us.

It’s essential to consider the body, that we have a sense of ownership. This is my body. We also have a sense of living over time, across time. Are we the same person we were at 4 years old, or will be 10 years hence?

We ask core questions over and over again:

Is there a self or not?

Can the self exist independent of all else?

If the self is an illusion, then whose illusion is it?

These are the questions that pop up when we ponder the nature of the self.

There are two kinds of unities we experience.

Synchronic unity—you’re sitting in this seat, and seeing me on the stage and listening to me. One entity is having all of these experiences.

Diachronic unity—you can go back in time and think about yourself, or you can imagine yourself in the future. They seem to be happening to the same entity, the same person.

The brain and the body together construct the self.

Some neuropsychological conditions challenge our sense of self. Can we look at that and understand how the brain and body make this happen?

I looked at Alzheimer’s, schizophrenia, autism, epilepsy, etc.

Cotard’s syndrome is a condition where people believe that they don’t exist; they’re already dead.

Alzheimer’s disease attacks the “narrative self.” We tell the story of our lives to describe who we are. We tell this story to ourselves. It’s how we think of ourselves. If you take away the narrative, is there still an “I”? Alzheimer’s challenges this idea. Even though people have lost their narrative, they still have a self.

People with Alzheimer’s sometimes still perform with proficiency—if the environment they’re in helps them do so. There are subconscious cognitive experiences. The composer Aaron Copland had Alzheimer’s and couldn’t tell you who he was, but he could stand up and conduct an orchestra performing his own composition, Appalachian Spring.

If you can engage the whole body in an experience, the learning is deeper.

A person with Alzheimer’s feels pain and hunger. Is the self experiencing those things?

We don’t know what the subjective experience of an Alzheimer’s patient is. The question of whether we have a self if we don’t have our narrative is still open to debate.

The brain helps the body survive. In order to do it, as it evolved, it started modeling the body. It has many maps and models of the body. They should be congruent with the physical body, but sometimes things go wrong, such as in the case of phantom limbs.

When there’s a mismatch, the information in your brain seems to overrule your body. Your body model thinks you should feel pain or sensation in your leg, even though it doesn’t exist, so you do.

In a computing environment dealing with the body and the mind, perhaps the computing environment will also deal with body maps.

I also studied autism. We can observe somebody else and get a perception of what might be happening in their mind. We have a theory about that just by watching them. It’s called the theory of mind. People with autism have an impaired theory of mind.

Autism is beginning to show us that the way a person with autism perceives their own body might be causing their problems with projecting into other people’s selves.

Out-of-body experiences are an intriguing phenomenon. This includes the Doppelgänger effect, where you perceive a copy of yourself in front of you. In addition, some people feel like they’re leaving the body and looking down on it from above. The processing in the brain has gone awry. Sometimes the “I” is located outside the geometric volume that is the body.

In virtual environments, it might be useful to create an out-of-body experience, so we need to understand what causes out-of-body experiences.

Some say we’re just a bundle of perceptions. Take them away, and there is no I.

Others say the self is nothing but consciousness. There is no person. There are just discrete moments of conscious experience.

We don’t know the answer to this. Is there an I? Is there a self?

The answer will come down to understanding the nature of consciousness, and we’re very far from that.

 

———-

12:40 PM
Cognitive and Immersive Systems
Panel Discussion: Dr. Andrew Johnson, Director of Research, Electronic Visualization Laboratory (EVL)
Henry Lieberman, Visiting Scientist, Massachusetts Institute of Technology
Marge McShane, Associate Professor of Cognitive Science, RPI
Carlton Sparrell, Vice President, Solutions, Oblong Industries
Richard Linderman, Deputy Director for Information Systems and Cyber Technologies, Office of the Assistant Secretary of Defense, Research and Engineering
Session Chair: Guruduth Banavar, Vice President of Cognitive Computing, IBM Research

Banavar: The goal of cognitive computing is to augment, not to replicate. How do you see the world of humans and machines collaborating?

McShane: We have to endow the machines with models of the world and how to think about and interact with the world. A machine has to incorporate its understanding of situations into its model of the world, and into the way it collaborates with people.

Lieberman: Humans and machines have to collaborate. We’re working so machines can understand simple everyday knowledge: that you sit in a chair, that water is wet. Simple knowledge that a 5-year-old would know. We often get tripped up by wanting to rush to the most complex knowledge, and we forget about the simpler but still important stuff.

Johnson: There are a couple of different definitions of immersion—being fully surrounded with displays and equipment and data, or smaller-scale spaces. Immersion helps us when we have people from different disciplines in the same room or connected rooms. You can have people viewing the same data in different forms and comparing the way they see the data. Groups can get more done in an afternoon than they could in six months without being together and without the technology to help them. They get to answers quicker and better.

The rooms are still just part of a continuum. People will still work alone in their offices. They’ll be doing individual work. So how do you maximize the effect of the immersive environment when it’s available and when people can get together? Having cognitive agents in the room can help us do that. If you get the tech right and the people right, you can solve problems really fast.

Lieberman: You have these moments when you’re all totally immersed in the work. It’s the flow. You don’t need the media, but it helps. You need to add a cognitive layer to immersion. Everything that’s relevant is there. The agent anticipates your needs and has things ready.

Traditional meetings are not effective. Either everybody gets their chance to “emote and vote,” or everybody advises the leader and they decide. They’re not good for problem solving. These cognitive and immersive environments will enable new ways of considering problems and deciding how to deal with them.

McShane: A lot of times in meetings people have intuitions but they don’t have the data. The cognitive agent could have a model of the whole situation, and could watch the people interact. It could tell them if they’re impeded by the “small sample bias.”

Johnson: A lot of cities are putting data out for everybody. It’s a rich platform for a lot of groups—transportation, crime, etc. We have a lot of data, but there are a lot of errors. Uncertainty is an issue we have to deal with. Let’s create more test beds and more disciplines brought together. We can learn about how to deal with uncertainty. How do we express uncertainty? That’s important.

Audience: How do we develop a workforce who can create these things?

McShane: We have to develop a workforce of knowledge engineers. We have a tremendous need for this. There’s a chasm between data and knowledge. Just as we have computer programmers, we will need knowledge programmers.

Sparrell: We have to make the environments intelligent enough so that people who use them don’t have to learn a lot of skills to use them.

Lieberman: Let’s stop training people to be machines. That’s the factory model of education we have. Constant testing. We need to train people to be inventive and collaborative. We need to go to exploratory learning—to get the skills that will be necessary to collaborate with machines.

Audience: We see the role of culture in meetings—involving people around the world. On the phone you never hear from anybody except a couple of people who are loud and do the talking. But in one of these environments, people could speak in their own language and feel comfortable. You need real-time translation.

McShane: Turns out that language is very imprecise, and different languages add difficulty. We need cognitive agents that can actually understand what’s being said and translate it into the other languages in the room or on the conference call. I don’t see a lot happening in this area.

Audience: What about using cognitive systems to help out with large scale democratic decision making processes—at the level of nation states?

Lieberman: We’re working on decision support tools. The great thing about the Web is we can have global commenting. But they’re just long lists of comments. We could use help in organizing large scale discussions so people who come into them can understand the culture and the content of what’s going on.

Banavar: The CISL approach requires multidisciplinary approaches. All the fields have to come together into a nexus of expertise that will take us to the next level.

———

12:05 PM
Cognitive and Immersive Systems
Keynote: Richard Linderman, Deputy Director for Information Systems and Cyber Technologies, Office of the Assistant Secretary of Defense, Research and Engineering

We have small teams going in harm’s way, but we also have large operations rooms, and we deal with cyber threats, so we have a wide range of uses for new technologies.

The next leap ahead will be getting cognitive technologies involved.

We have limited manpower, harsh environments, the need for rapid response, new mission requirements and medical challenges.

We look to autonomous systems to help us address all of these challenges.

In war situations, we have to move lots of data and make better decisions. We need systems that can help us complete missions even with intermittent communications. We need intelligence about what’s happening and what to do about it.

There’s a new issue, cyber-hardened systems. They have to be resilient to cyber attack.

We have to know what’s in the systems, and how they work. It can’t just be a black box. We want open technologies as our foundations.

Machine learning, reasoning and intelligence will be key capabilities. AI overpromised. But recent breakthroughs in deep learning and other areas are showing much better performance.

We’re close to the point where high-performance computing can produce modeling and simulations needed for autonomous cognitive systems.

One area we’re looking at is secure bio-inspired processors that can be placed in pods and attached to aircraft to process sensor data. We’re working with IBM’s TrueNorth neuromorphic processor for its tremendous power efficiency.

We’re using text extraction technologies that help us read faster than humans can and digest huge amounts of knowledge.

We’re taking another look at AI now. We’re interested in deep learning and in robotics.

The Army is actively engaged in robotics. We want to shrink robots down to bug size, with the intelligence of bugs. There are many places where we’d like to be a bug on the wall.

We’re working on text and audio, imagery and video, and cyber detection.

DoD is confident that leveraging relationships with industry and academia will be necessary to help overcome the challenges we face.

——-


10:45 AM
Fireside Chat: Cognitive and Immersive Systems
Shirley Ann Jackson, President, Rensselaer Polytechnic Institute
John Kelly, Senior Vice President, Solutions Portfolio & Research, IBM
Moderator: Heidi Moore, Business Editor at Mashable

Moore: This sounds like a visit to the future. Which of these applications gets you most excited?

Jackson: Healthcare. Addressing the challenges here and all around the world. Education. That’s thrilling. I’m also focused on intersecting vulnerabilities and cascading consequences, such as natural disasters. The tsunami in Japan. Think about the design of nuclear plants and associated infrastructure. You want to bring in historical data and real-time data. You can do things differently.

Kelly: Some of the recent results the team has gotten from studying mental health problems. We’re analyzing speech to understand patterns and find early signs of neurological diseases. You can detect them early. Also, by studying neuroscience, we might be able to accelerate learning—not just learning more, but faster.
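
Kelly doesn’t go into the technique, but as a rough illustration of the kind of lexical markers speech analysis could start from, here is a minimal sketch; the transcript and feature set are assumptions for illustration, not IBM’s pipeline.

```python
import re

# Hypothetical transcribed speech sample; a real study would use clinical recordings.
transcript = ("so I went to the ... the store and I, I forgot um what I was "
              "looking for um and then I came home")

words = re.findall(r"[a-z']+", transcript.lower())
fillers = {"um", "uh", "er"}

# Simple lexical markers sometimes studied as coarse signals of cognitive change.
features = {
    "word_count": len(words),
    "type_token_ratio": len(set(words)) / len(words),            # vocabulary diversity
    "filler_rate": sum(w in fillers for w in words) / len(words),
    "repetition_rate": sum(a == b for a, b in zip(words, words[1:])) / max(len(words) - 1, 1),
}
print(features)
```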

Moore: How did this get started?

Jackson: John and I hatched this plot a year and a half ago at a dinner.

Kelly: We have had a tremendous partnership. We knew something special was going on in human-computer interactions. Rensselaer had built this great technological arts center, EMPAC. I saw we could work with faculty and students and do immersion at scale.

Moore: We see algorithms all around us, Google, Facebook measuring traffic. Why has it taken so long for AI to take off?

Kelly: AI goes back to the 50s. A lot of experiments failed. We were in an AI winter. It was before its time. But with Watson, enough data was digitized. You could do something meaningful with a system. Also, computing power had improved dramatically. Advances in machine learning and neural networks had come along as well. It was a unique point in time.

Jackson: We’re imbuing computational systems with reasoning; bringing in social sciences to make machines more human-like. We can build very sophisticated models. There are also advances in things like machine vision. We can bring multimodal and multisensory inputs into cognitive systems. We’re bringing in data from the web. This is a unique moment. It’s a question of how we use it.

Moore: There’s fear about the rise of machines. How do you deal with that?

Jackson: When you look over history, people have worried about each advance in technology. There was great fear of cars and aircraft. Nuclear power.

Any time one uses anything other than one’s bare hands, one is using a technology—from a pencil to a supercomputer. We should think about how we use technology. It’s up to us to use it for the kinds of purposes we talk about. So we build into our education discussions of ethics in the development and use of technology.

On job loss: it’s always a big deal. But you see how we migrate as things change. As technologies change, they change the nature of work. The challenge is to keep people abreast. We can use these technologies to prepare our students for the jobs of the future.

Kelly: It’s back to this human-and-system idea. In every industry, there’s a range of abilities among the people who do the jobs. If you introduce this technology properly, the very good people become even better, the norm becomes as good as the best, and the people who struggle become better at their jobs. We see people getting even better wherever we insert these technologies.

Moore: Others have said we should think about the rise of the machines because of how humans will program them. How should we be approaching these problems?

Jackson: We have to talk about intent. As we educate the students who develop and use these technologies, a key part of their education has to be focused on the responsibility they have. We can’t make everybody an angel, but we can teach them about their responsibility and the consequences of their actions. Our students want to make a positive difference and solve global challenges.

Kelly: At IBM, the people creating these systems are very conscious of the strengths and weaknesses of the systems. In healthcare, we always want the human involved. Watson has blind spots. It doesn’t yet consider quality-of-life factors. In critical situations, you have to keep human beings in the loop or in the front of the loop.

Moore: What industries have embraced this the most?

Kelly: Every industry has reached out to us. We’re sold out. We started with healthcare and physicians. They saw early that this would change healthcare. I’m interested in bringing Watson to bear in security and cyber security.

Jackson: We have to use cognitive computing in analyzing climate change and climate effects, and the use of resources. Also we need to use it in analyzing the financial system, to spot problems that are developing or understand why something happened.

Audience: If two Watsons met each other, would they greet each other?

Jackson: They will meet here.

Kelly: We created several systems, and then we moved Watson to the cloud. Also, originally Watson was one service; it’s now 26 services you can access on the cloud. We also have discovered that the data is so important that there are industry-specific Watsons. A healthcare Watson might meet a security Watson.

Audience: There’s good and evil in society. Will you have to build in moral codes? Will you have an override function?

Kelly: We’re studying putting policy engines into Watson, instilling some of our values into the system. We debate this because we’re introducing human biases. Would it still be Watson, which is objective and fact-based?

Jackson: We’re doing work on establishing trust in machines—not to displace the imposition of human values but to understand how they might be expressed in computer systems.

Kelly: We have seven languages for Watson. It’s learning how we think through the language. That may affect how they think.

Audience: Would a cognitive system be applicable to the criminal justice system?

Kelly: It could bring all facts around a case and case law forward. It won’t be the judge. It would inform the decision maker. Bring the facts forward in an unbiased way.

Moore: Remember, Rolling Stone had to retract an article about the University of Virginia. Imagine a machine being able to draw information from many sources, in real time. It can be part of the editorial decision making—with the editors, the writers. It won’t remove bias but might bring in a broader perspective before something is published.

Kelly: In oncology, the specific chemo treatment is discussed by a tumor review board. They have biases. They don’t have all the information they need. Watson can bring it in. Watson can participate in the discussion and raise issues they haven’t thought of.

Audience: Is bandwidth becoming an issue in collaboration?

Kelly: The interconnect speed is the bottleneck. As a result, more compute is happening where the data is. We’re moving the processing, the Watsons, to the data. We’re sending the insights, not all the data.
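
A toy sketch of that idea, computing the summary where the records live and shipping only the result, might look like this; the record format and the chosen “insight” are assumptions for illustration:

```python
import json

# Rather than transmitting every raw record over the network,
# compute the summary locally and send only that.
raw_records = [{"sensor": "turbine_temp", "value": 580 + i % 7} for i in range(10_000)]

def local_insight(records):
    """Runs where the data is stored; only the result leaves the site."""
    values = [r["value"] for r in records]
    return {
        "sensor": records[0]["sensor"],
        "count": len(values),
        "mean": sum(values) / len(values),
        "max": max(values),
    }

payload = json.dumps(local_insight(raw_records))
print(f"Insight payload: {payload} ({len(payload)} bytes, "
      f"vs. ~{len(json.dumps(raw_records)) // 1024} KB of raw data)")
```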

Audience: In decision making, in innovation, there are a lot of impediments to progress from bias, power structures, and conventional wisdom. How do you help insights bubble up from the bottom?

Kelly: One of the things we could put in Watson is what we think is “ground truth,” or “human truth.” We could get answers more quickly, but we could get more bias and wrong answers. I think we have to minimize human bias and ground truth, and Watson will be a more powerful force.

Audience: What about individualized cognitive systems that understand you deeply and can advise you? Is that something you’re thinking about?

Kelly: Think about the aging population, and having personalized cognitive systems in your home that assist you but also measure your cognitive abilities based on your speech and behavior. We’re working on this. It’s a very interesting individualized use of the technology.

Jackson: One can imagine having a system with facial recognition linked to a database that might tell you if a certain individual is an unsavory character. It would tell you to stay away.

The multimodal work we’re doing here plays to this. We’re using different sensory modes. This can help the systems respond to us individually.

——

John Kelly, Shirley Jackson, and Hui Su, director of the CISL, perform a virtual ribbon cutting on stage. Huge scissors, huge ribbon, huge screen behind them. Lots of virtual confetti on the screen.

——-
10:00 AM
John Kelly, Senior Vice President, Solutions Portfolio & Research, IBM

I remember when the Watson project got started. It was called Project Bluejay. In 2007, a group of researchers wanted to take on the problem of deep question answering.

I said don’t you understand we’re in a winter of artificial intelligence.

They told me they were going to build a system, call it Watson, and put it to the test on national television.

I said I don’t think so.

But they went on to create and succeed with the first-ever cognitive system, winning with Watson on Jeopardy!

The team didn’t set out to create a form of artificial intelligence to replicate what humans can do. It was about bringing insights to humans, not replacing them. This was a big thought.

I think we’re addressing the problem—the price of not knowing. Every industry is being swamped by information. With cognitive technologies we can unlock tremendous new insights and create new value.

You see it in the Internet of Things, security and in healthcare.

Incredible amounts of information, but most goes into storage.

Think about the cost of not knowing in Paris.

So much data lies dark, lies dormant. We don’t use it. Our machines don’t understand it. Think about the value we can draw from it.

Every industry is being swamped by data. Most isn’t being used. It’s lost or put in deep storage. The cost of not using that information is off the charts.

The original Watson was the beginning of the cognitive era. We don’t reprogram it. We feed better data. It learns over time based on a feedback loop.
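
A minimal sketch of that kind of feedback loop, where corrected examples are folded back into training rather than the code being changed, could look like the following; the model and data are illustrative, not Watson’s actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative starting data: short texts labeled by topic.
texts = ["engine vibration detected", "patient reports chest pain",
         "fuel pressure anomaly", "tumor biopsy scheduled"]
labels = ["aerospace", "medical", "aerospace", "medical"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def feedback_loop(new_text, correct_label):
    """When a human corrects an answer, fold the example back into the
    training set and refit; the program stays the same, the data grows."""
    texts.append(new_text)
    labels.append(correct_label)
    model.fit(texts, labels)

prediction = model.predict(["hydraulic leak near landing gear"])[0]
if prediction != "aerospace":
    feedback_loop("hydraulic leak near landing gear", "aerospace")
```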

The system was great at natural language. Now we have given Watson the ability to see images.

Now, what’s going on between the human and the system? We humans have some great skills. Machines have their own set of capabilities, not just for math. It’s the combination of human and machine that is the magic. Human plus machine will beat human or machine at almost every task.

When humans hit a roadblock and Watson comes in, we make progress. When Watson hits a roadblock, the humans come in and help push forward.

Cognitive systems learn at scale, reason with purpose and interact with humans in a natural way.

Let’s rethink how we do things.

Let’s rethink the call center. Don’t replace the people. Put Watson in the call center to help.

Rethink sports. We’re working with companies that put sensors in helmets and can detect whether the athlete has been injured.

Rethink insurance. It’s a complicated field. Today with a large insurer, if you go online and apply for auto insurance, Watson will help you through the process. It knows the rules and it knows you.

In oil and gas, there’s a huge cost of not knowing where to drill. Watson can help.

Rethink medical imaging. Watson can help radiologists view thousands of images and improve diagnosis and hopefully save lives.

Rethink genomic medicine. We hope to use Watson to make sense of the massive amount of genomic information—to spot individual mutations and individual cancers—to find better treatments.

In education, we have a chance to finally change the way we educate our children and adults around the world. We can understand how every individual is learning and adjust the teaching to the way they learn best. It’s individualized education.

Interacting with humans at scale will be a game changer. We started working on this soon after the original Watson competed on TV. They were small experimental labs. Then we expanded to rooms with large screens and walls for visualizing data and interacting with Watson.

We were onto something, but we needed a bigger platform where people were working on immersive technologies and we could work with them at scale. That’s why we tied up with Rensselaer. We had to be here in the EMPAC.

I think the next stage of our partnership is cognitive computing at scale, and I think it will change the world.

——

09:30 AM
Shirley Ann Jackson, President, Rensselaer Polytechnic Institute

Four elements are required to change the world: Inspired people, programs, platforms and partnership.

Rensselaer’s Curtis R. Priem Experimental Media and Performing Arts Center is a great platform. Our partnership with IBM dates back decades to the IBM System/360. IBM Senior Vice President John Kelly, who got his PhD at Rensselaer, is an inspired person who has been our partner for decades as well.

We were the first and so far the only university in this country to get a physical Watson system on our campus.

At Rensselaer, we’re taking on major challenges for society, such as personalizing medicine, improving education, and addressing extreme events, such as climate change and terrorism.

Now, with IBM, we’re embarking on another major initiative, the Cognitive and Immersive Systems Lab (CISL). We’ll initially focus on situation rooms such as a cognitive boardroom, a cognitive design studio, a cognitive classroom, and a cognitive medical diagnostics room. These are collaborative environments.

We recognize the great power of cognitive computing to ingest vast amounts of diverse data types.

What makes CISL so ambitious? We’re not just looking at a single cognitive agent assisting a single person. Instead, we’ll enhance group cognition and group decision making. The problems of our day are too complex and too interconnected to be solved by individuals. That’s why we have to enable collaborations.

We anticipate a hierarchy of cognitive agents to assist every individual person, as well as an agent to assist the people as a group.

Cognitive agents will have to be sophisticated enough to be able to understand what’s being said and what’s not being said.

In the rooms, we’re enhancing human cognition in multimodal spaces that take in visual and verbal cues and movements, and present information to people through multisensory pathways.

The rooms will recognize people by their faces and follow them as they move around the room.

The rooms should be able to follow the flow of the meeting, picking up the mood, biases, power, and even detect when a decision has been made.

We’ll benefit from cognitive computing, high performance computing, and neuromorphic computing.

We’ll be able to glean insights from social networks. We’ll be able to model trust, which is essential in collaboration.

We’ve had breakthroughs in immersive environments, which will come to play. We’ve made advances in panoramic screen technology here at EMPAC. To support discussions among small groups we’ve developed Campfire, a virtual fire pit, where information is displayed and conversation is encouraged.

CISL will bring together humans and machines so machines can handle enormous amounts of data and enhance the gifts that belong uniquely to humans: courage, creativity, insight, and a desire to make a better world.

 

gurubanavar05:00 PM
Guruduth Banavar, Vice President of Cognitive Computing, IBM Research

Wrapping up, there are three things I want to highlight….

First, concerning neuroscience:

The analogy we draw from neuroscience and the human brain, the inspiration we get from the brain, has driven a lot of thinking. This is an area the I want to explore more.

We talked about how cognitive systems need to aware of themselves at some level to be effective in collaboration. They have to know their limits—what they can and can’t do.

The second point concerns cognitive technologies:

The idea of knowledge engineering came across loud and clear to me. It came across in every session. We need to train people to understand data, domain knowledge and the language of domains. These are huge areas that will redefine jobs and professions.

Third point, about the combination of cognitive and immersive systems:

The concept that stood our for me was the importance of human scale. EMPAC can give us an opportunity to do experiments to explore this topic and find out what will work in a practical setting.

—–
03:45 PM – 04:30 PM
Panel Discussion: Ronny Fehling, Vice President, Vice President and Head of Data Driven Technologies, Airbus Group
Joel Parker, School of Medicine, UNC-Chapel Hill, Cancer Genetics
James Hendler, Director, Institute for Data Exploration and Applications, RPI
Aaron E. Walsh, Director of the Immersive Education Initiative
Session Chair: Jonathan S. Dordick, Howard P. Isermann Professor of Chemical and Biological Engineering, RPI

Dordick: We don’t know enough about biology. How do you fill in the gaps int he data using cognitive and immersive technologies.

Handler: We’re trying to figure out how to bring data together from different groups and professions. They often have different definitions for things. We’re using machine learning. We’re seeing what people have in common that survived cancer. We’re using the data to figure out where the science may like, and turn it over to the scientists to say how can we solve some of these problems.

Dordick: Ronny, there are as many parts in a human cell as in an aircraft. What technologies are you using in aircraft design.
ronny-fehling-2013-06-004-1-gimp3-150x150Fehling: We have some amazing models to build and test airplane sand to simulate them. We collect information during flights from sensors. What we try to do on the cognitive level is test out different scenarios we can play out in real time.

In the design phase, we try to visualize the design and have the system see where there are limits to what you can do.

Dordick: Aaron, how do you see cognitive and immersive systems impacting the classroom of the future?

Walsh: Our goal is to use technology to better engage students in learning. We use virtual reality technologies. I meet them in a virtual space. We also use augmented technology, simulations and learning games. What’s missing is artificial intelligence.

When it comes time for the student to ask questions, AI hasn’t been able to do it. But cognitive computing has the potential to create personal learning experiences—like you have a professional guide to learning.

james-hendler-221x300Hendler: With our Mandarin project, we’re adding intelligent agents into it. We’re teaching Mandarin in a new way, immersing the students in learning so they can learn faster. The students are helping us develop the technologies for their course. We’re using narrative and storytelling techniques.

Walsh: You learn better from the learning experience if you help to design it.

We’re building virtual reality history lessons, and having the students construct their own immersion experiences. Not every student can do this, so we need cognitive systems to help.

Dordick: At Rensselaer, our theme this year is resilience. Ronny, how do you address the resilience of your supply chain at Airbus?

Fehling: If we have problems with the supply chain, it can hold a plane at an assembly station for four hours or up to a week. So we have to be to anticipate problems. We have to monitor and see outliers so we can predict and be alerted and head off problems. We can use these cognitive and immersive technologies together on this.

Handler: We’re looking at the resilience initiative. So much of the supply chain work is based on economic factors. What really causes the problems is when two or three things go wrong at once, or when second- or third-tier suppliers have a problem. We look at cascading failures. We track all the factors that affect our supply chain and use that in our model and in our planning. We use cognitive technologies for this. We have cognitive planning technology.

Walsh: The idea of supply chain and cascading problems can be applied to the learning experience. Most students have gaps in knowledge. When they get to a certain point, you stall. But if a system can identify the gaps you can fill them before they become problems.

You could develop systems for education that are similar to the ones that help improve supply chains.

Audience: How do you deal with the risks and liabilities of using these technologies.

Fehling: We’re trying to decouple the mission critical systems from the cognitive systems. We’re struggling to define the interface.

Handler: We have to help the policy makers deal with some of these new technologies. The payoff from using these technologies can be huge, but we have to help the policymakers and the students deal with the legal issues.

——

04:15 PM

Healthcare and Other Applications
Keynote: Joel Parker, School of Medicine, UNC-Chapel Hill, Cancer Genetics

In my field, we have tons of data, but we still have many unanswered questions.

Our big source of data today is the Cancer Genome Atlas, a large collaborative study across the US and Canada to understand the molecular and genomic basis of cancer. It includes data from DNA sequencing and RNA sequencing—from about 10,000 people. Now we’re going through and mining the data for insights.

We’re looking for mutation rates, mutated genes, clinical features and how the tumor is evolving over time.

Using the information, we’re breaking cancers down into subtypes so we can produce more personalized treatments.

It’s clear that genomics can provide valuable clinical information. We can model it to extract the necessary information for decisions.

Cognitive computing can be helpful here.

We and other institutes are now proactively sequencing the genomes of current patients. This way, we can correlate molecular alterations with clinical outcomes.

There’s no way we can handle all this data. Our hope is that Watson can.

——–

3:00 PM
Panel Discussion: Anil Ananthaswamy, Consultant, New Scientist magazine
Dr. Wayne Gray, Professor of Cognitive Science, RPI
Dr. Gary Marcus, Professor of Psychology and Neural Science, New York University
Session Chair: Dr. Guillermo Cecchi, Research Staff Member, Computational Neuroscience, IBM Research

Cecchi: Is a notion of self essential to having a quantum leap in cognitive systems? And what can neuroscience teach cognitive systems?

gary-marcus-272x300Marcus: I think self is optional for a lot of things. These so much we have to do, even in natural language processing. Our phones don’t yet really know what we’re talking about in a deep way.

I’m more interested in the machine having a representation of me. It can do that without having a representation of itself.

Self is optional, at least for the next decades.

Gray: I think in terms of cognitive systems more than of selves.

Ananthaswamy: The self doesn’t need to be self conscious. It’s a system that has a boundary and is aware of it. All systems will have that. But does the AI become aware of the self having a self.

In the control system of a robot, it will have a mechanistic understanding of itself—what is does and what it can do.

Already, a self driving car has an understanding of itself—what it does, the rules it follows.

Ananthaswamy: We humans have value systems. The AI systems will perceive them and may react to them. But you can also program in the value systems so they know how they’re supposed to behave.

Guillermo CecchiCecchi: Neuroscience is a very large area. A group of people are trying to reconstruct the neural activity in the brain and use it as a model for computing. Others are pursuing other approaches. What direction should we go?

Marcus: There’s a cognitive science approach and a neuroscience approach. What can we learn from them to help us design smart machines. Can we look to humans for clues? The answer is next.

Can we look at the neural connections? Some are doing that. But it’s incredibly difficult.

We’ll need a lot of AI to help us understand the circuits of the brain—so we can use that to design computers.

Other AI research is based on getting a better understanding of human cognitive processes. I think it’s more likely that we can make progress with that.

I like Minsky’s idea of the society of minds. We are a collection of ideas and impulses. We need to be careful about how much we use humans and our brains as models for smart machines.

We don’t want AI systems that are as impulsive as a teenager.

Cecchi: Superintelligent machines don’t have to be copies of the human brain…

Marcus: We don’t have to build replications of humans. I have two children already. I want machines that can do things that we don’t do so well.

Audience: We think of cognitive systems as having a relative static sense of abilities. We can train them to some extent. But how can we build cognitive systems that gain or lose abilities over time—without a theory of self.

Marcus: Robots can recover injuries and they can get new capabilities through software without having a sense of self. Your phone has a new skill every time you download an app. There’s a system where things are connected and that knows how to deal with new skills.

If your phone starts to wonder who am I and why am I here, then we’ll have to figure out how to deal with this.

anil-ananthaswamy_authorphoto_creditprasadvaidya-300x200Ananthaswamy: A robot might understand that it needs to maintain its self—if it’s arm falls off. It might ask people to repair it or do the repairs itself.

Audience: We are always spewing information about ourselves into the universe. How do we go about figuring out who it is who is spewing out the data. And how to you apply that to machines?

Marcus: You can imagine neuroscience 50 years hence. Maybe you’ll be able to upload your brain and make a backup copy. Then you can use it later if somebody goes wrong with your brain. It might help with traumatic brain injury.

—-

02:30 PM
Neuroscience and Cognitive Systems
Keynote: Anil Ananthaswamy, Consultant, New Scientist magazine

I explore the human sense of self. We’re faced with the opportunity to use cognitive computing to augment our experience of being alive. If we can’t answer the question, who am I, we won’t be able to design intelligent systems that can interact more naturally with us.

It’s essential to consider the body, that we have a sense of ownership. This is my body. We also have a sense of living over time, across time. Is it the same person when we were 4 years old or 10 years hence?

We ask core questions over and over again:

Is there a self or not?

Can the self exist independent of all else?

Is the self is an illusion, then whose illusion is it?

These are the questions that pop up when we ponder the nature of the self.

There are two kinds of unities we experience.

Synchronic unity—you’re sitting in this seat, and seeing me on the stage and listening to me. One entity is having all of these experiences.

Diachronic unity—we can go back in time and think about yourself or you can imagine yourself in the future. They seem to be happening to the same entity, the same person.

The brain and the body together construct the self.

Some neuropsychological conditions challenge our sense of self. Can we look at that and understand how the brain/body make this happen.

I looked at Alzheimer’s, schizophrenia, autism, epilepsy, etc.

Cotard’s syndrome is a condition where people believe that they don’t exist; they’re already dead.

In Alzheimer’s disease, it attacks the “narrative self.” We tell the story of our lives to describe who we are. We tell this story to ourselves. It’s how we think of ourselves. If you take away the narrative, is there still an “I.” Alzheimer’s challenges this idea. Even though people have lost their narrative they still have a self.

People with Alzheimer’s sometimes still perform with proficiency—if the environment they’re in helps them do so. Their are subconscious cognitive experiences. The composer Aaron Copeland had Alzheimers and couldn’t tell you who he was but he could stand up and conduct an orchestra performing his musical composition, American Spring.

If you can engage the whole body in an experience, the learning is deeper.

A person with Alzheimer’s feels pain and hunger. Is the self experiencing those things?

We don’t know what the subjective experience of an Alzheimer’s patient is. The question of whether we have a self if we don’t have our narrative is still open to debate.

The brain helps the body survive. In order to do it, as it evolved, it started modeling the body. It has many maps and models of the body. They should be congruent with the physical body, but sometimes things go wrong, such as in the case of phantom limbs.

When there’s a mismatch, the information in your brain seems to overrule your body. Your body model thinks you should feel pain or sensation in your leg, even though it doesn’t exist, so you do.

In a computing environment dealing with the body and the mind, perhaps the computing environment will also deal with body maps.

I also studied autism. We can observe somebody else and get a perception of what might be happening in their mind. We have a theory about that just by watching them. It’s call the theory of mind. People with autism have an impaired theory of mind.

Autism is beginning to show us that the way a person with autism perceives their own body might be causing their problems with projecting into other people’s selves.

Out-of-body experiences are intriguing phenomenon. This includes the Doppelgänger effect, where you perceive a copy of yourself in front of you. In addition, some people feel like they’re leaving the body and looking down on it from above. The processing in the brain has gone awry. Sometimes the “I” is located outside the geometric volume that is the body.

In virtual environments, it might be useful to create an out-of-body experience, so we need to understand what causes out-of-body experiences.

Some say we’re just a bund of perceptions. Take them away, and there is no I.

Others say the self is nothing but consciousness. There is no person. There are just discrete moments of conscious experience.

We don’t know the answer this. Is there an I? I there a self?

The answer will come down to understanding the nature of consciousness, and we’re very far from that.

 

———-

12:40 PM
Cognitive and Immersive Systems
Panel Discussion: Dr. Andrew Johnson, Director of Research, Electronic Visualization Laboratory (EVL)
Henry Lieberman, Visiting Scientist, Massachusetts Institute of Technology
Marge McShane, Associate Professor of Cognitive Science, RPI
Carlton Sparrell, Vice President, Solutions, Oblong Industries
Richard Linderman, Deputey Director for Information System and Cyber Technologies, Office of the Assistant Secretary of Defense, Research and Engineering
Session Chair: Guruduth Banavar, Vice President of Cognitive Computing, IBM Research

guru-banavar-293x300Banavar: The goal of cognitive computing is to augment, not to replicate. How to do you see the world of humans and machines collaborating.

McShane: We have to endow the machines with models of the world and how to think about and interact with the world. A machine has to incorporate its understanding of situations into its model of the world, and into the way it collaborates with people.

Lieberman: Humans and machines have to collaborate. We’re working so machines can understand simple everyday knowledge, that you sit in a chair, water is wet. Simple knowledge that a 5 year old would know. We often get tripped up by wanting to rush to the most complex knowledge, and we forget about the simpler but still important stuff.

andy-johnson-241x300Johnson: There are a couple of different definitions of immersion—being fully surrounded with displays and equipment and data, or smaller scale spaces. Immersion helps us when we have people from different disciplines in the same room or connected rooms. You can have people viewing the same data in different forms and comparing the way they see the data. Groups can get more done in an afternoon than they can do in 6 months, if they’re not together and they don’t have the technology to help them. They get to answers quicker and better.

The rooms are still just part of a continuum. They’ll work alone in their offices. They’ll be doing individual work. So how to you maximize the effect of the immersive environment when it’s available and when people can get together. Having cognitive agents in the room can help us do that. If you get the tech right and the people right, you can solve problems really fast.

Lieberman: You have these moments when you’re all totally immersed in the work. It’s the flow. You don’t need the media, but it helps. You need to add a cognitive to immersion. Everything that’s relevant is there. The agent anticipates your needs and has things ready.

Traditional meetings are not effective. Either everybody gets their chance to “emote and vote,” or everybody advises the leader and they decide. They’re not good for problem solving. These cognitive and immersive environments will enable new ways of considering problems and deciding how to deal with them.

marge-mcshaneMcShane: A lot of times in meetings people have intuitions but they don’t have the data. The cognitive agent could have a model of the whole situation, and could watch the people interact. It could tell them if they’re impeded by the “small sample bias.”

Johnson: A lot of cities are putting data out for everybody. It’s a rich platform for a lot of groups—transportation, crime, etc. We have a lot of data, but there are a lot of errors. Uncertainty is an issue we have to deal with. Let’s create more test beds and more disciplines brought together. We can learn about how to deal with uncertainty. How do we express uncertainty? That’s important.

Audience: How do we develop a workforce that can create these things?

McShane: We have to develop a workforce of knowledge engineers. We have a tremendous need for this. There’s a chasm between data and knowledge. Just as we have computer programmers, we will need knowledge programmers.

Sparrell: We have to make the environments intelligent enough so that people who use them don’t have to learn a lot of skills to use them.

Lieberman: Let’s stop training people to be machines. That’s the factory model of education we have. Constant testing. We need to train people to be inventive and collaborative. We need to move to exploratory learning—to build the skills that will be necessary to collaborate with machines.

Audience: We see the role of culture in meetings involving people around the world. On a phone call you never hear from anybody except a couple of people who are loud and do the talking. But in one of these environments, people could speak in their own language and feel comfortable. You need real-time translation.

McShane: Turns out that language is very imprecise, and different languages add difficulty. We need cognitive agents that can actually understand what’s being said and translate it into the other languages in the room or on the conference call. I don’t see a lot happening in this area.

Audience: What about using cognitive systems to help out with large scale democratic decision making processes—at the level of nation states?

Lieberman: We’re working on decision support tools. The great thing about the Web is we can have global commenting. But they’re just long lists of comments. We could use help in organizing large scale discussions so people who come into them can understand the culture and the content of what’s going on.

Banavar: The CISL approach is multidisciplinary. All the fields have to come together into a nexus of expertise that will take us to the next level.

———

12:05 PM
Cognitive and Immersive Systems
Keynote: Richard Linderman, Deputy Director for Information Systems and Cyber Technologies, Office of the Assistant Secretary of Defense, Research and Engineering

We have small teams going in harm’s way, but we also have large operations rooms, and we deal with cyber threats, so we have a wide range of uses for new technologies.

The next leap ahead will be getting cognitive technologies involved.

We have limited manpower, harsh environments, the need for rapid response, new mission requirements and medical challenges.

We look to autonomous systems to help us address all of these challenges.

In war situations, we have to move lots of data and make better decisions. We need systems that can help us complete missions even with intermittent communications. We need intelligence about what’s happening and what to do about it.

There’s a new issue, cyber-hardened systems. They have to be resilient to cyber attack.

We have to know what’s in the systems, and how they work. It can’t just be a black box. We want open technologies as our foundations.

Machine learning, reasoning and intelligence will be key capabilities. AI over-promised in the past. But recent breakthroughs in deep learning and other areas are showing much better performance.

We’re close to the point where high-performance computing can produce modeling and simulations needed for autonomous cognitive systems.

One area we’re looking at is secure bio-inspired processors that can be placed in pods and attached to aircraft to process sensor data. We’re working with IBM’s TrueNorth neuromorphic processor for its tremendous power efficiency.

We’re using text extraction technologies that can read faster than humans and digest huge amounts of knowledge.

We’re taking another look at AI now. We’re interested in deep learning and in robotics.

The Army is actively engaged in robotics. We want to shrink robots down to bug size, with the intelligence of bugs. There are many places where we’d like to be a bug on the wall.

We’re working on text and audio, imagery and video, and cyber detection.

DoD is confident that leveraging relationships with industry and academia will be necessary to help overcome the challenges we face.

——-

———–

10:45 AM
Fireside Chat: Cognitive and Immersive Systems
Shirley Ann Jackson, President, Rensselaer Polytechnic Institute
John Kelly, Senior Vice President, Solutions Portfolio & Research, IBM
Moderator: Heidi Moore, Business Editor at Mashable

Moore: This sounds like a visit to the future. Which of these applications gets you most excited?

Jackson: Healthcare. Addressing the challenges here and all around the world. Education. That’s thrilling. I’m also focused on intersecting vulnerabilities and cascading consequences, such as natural disasters. The tsunami in Japan. Think about the design of nuclear plants and associated infrastructure. You want to bring in historical data and real-time data. You can do things differently.

Kelly: Some of the recent results the team has gotten from studying mental health problems. We’re analyzing speech to understand patterns and find early signs of neurological diseases. You can detect them early. Also, by studying neuroscience, we might be able to accelerate learning—not just learn more but learn faster.
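As a rough illustration of the kind of speech-pattern analysis being described, here is a minimal sketch that computes simple timing features (speaking rate, pause ratio) from timestamped words. The input format, function name, and features are assumptions for the example, not IBM’s actual pipeline.

```python
# Hypothetical illustration only: simple timing features from timestamped words,
# the kind of low-level signal a speech-pattern analysis might start from.
def speech_features(words, total_seconds):
    """words: list of (word, start_sec, end_sec) tuples from any transcriber."""
    spoken = sum(end - start for _, start, end in words)
    pauses = max(total_seconds - spoken, 0.0)
    return {
        "words_per_minute": 60.0 * len(words) / total_seconds,
        "pause_ratio": pauses / total_seconds,        # fraction of time silent
        "mean_word_duration": spoken / len(words),
    }

# Example: three timed words in a five-second clip.
sample = [("the", 0.0, 0.3), ("patient", 0.5, 1.1), ("walked", 2.0, 2.6)]
print(speech_features(sample, total_seconds=5.0))
```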

Moore: How did this get started?

Jackson: John and I hatched this plot a year and a half ago at a dinner.

Kelly: We have had a tremendous partnership. We knew something special was going on in human-computer interactions. Rensselaer had built this great technological arts center, EMPAC. I saw we could work with faculty and students and do immersion at scale.

Moore: We see algorithms all around us, Google, Facebook measuring traffic. Why has it taken so long for AI to take off?

Kelly: AI goes back to the ’50s. A lot of experiments failed. We were in an AI winter. It was before its time. But with Watson, enough data was digitized. You could do something meaningful with a system. Also, computing power had improved dramatically, and advances in machine learning and neural networks had come along as well. It was a unique point in time.

Jackson: We’re imbuing computational systems with reasoning and bringing in social sciences to make machines more humanlike. We can build very sophisticated models. Also advances in things like machine vision. We can bring multimodal and multisensory inputs into cognitive systems. Bringing in data from the web. This is a unique moment. It’s a question of how we use it.

Moore: There’s fear about the rise of machines. How do you deal with that?

Jackson: When you look over history, people have worried about each advance in technology. There was great fear of cars and aircraft. Nuclear power.

Any time one uses anything other than one’s bare hands, you’re using a technology—from a pencil to a supercomputer. We should think about how we use technology. It’s up to us to use it for the kinds of purposes we talk about. So we build into our education discussions of ethics in the development and use of technology.

On job loss: it’s always a big deal. But you see how we migrate as things change. As technologies change, they change the nature of work. The challenge is to keep people abreast. We can use these technologies to prepare our students for the jobs of the future.

Kelly: It’s back to this human-and-system idea. In every industry, there’s a range of abilities among the people who do the jobs. If you introduce this technology properly, the very good people become even better, the norm becomes as good as the best, and the people who struggle become better at their jobs. We see people getting better wherever we insert these technologies.

Moore: Others have said we should think about the rise of the machines because of how humans will program them. How should we be approaching these problems?

Jackson: We have to talk about intent. As we educate the students who develop and use these technologies, a key part of their education has to be focused on the responsibility they have. We can’t make everybody an angel, but we can teach them about their responsibility and the consequences of their actions. Our students want to make a positive difference and solve global challenges.

Kelly: At IBM, the people creating these systems are very conscious of their strengths and weaknesses. In healthcare, we always want the human involved. Watson has blind spots. It doesn’t yet take quality of life into account. In critical situations, you have to keep human beings in the loop, or in front of the loop.

Moore: What industries have embraced this the most?

Kelly: Every industry has reached out to us. We’re sold out. We started with healthcare and physicians. They saw early that this would change healthcare. I’m interested in bringing Watson to bear in security and cyber security.

Jackson: We have to use cognitive computing in analyzing climate change and climate effects, and the use of resources. Also we need to use it in analyzing the financial system, to spot problems that are developing or understand why something happened.

Audience: If two Watsons met each other, would they greet each other?

Jackson: They will meet here.

Kelly: We created several systems, and then we moved Watson to the cloud. Also, originally Watson was one service; it’s now 26 services you can access on the cloud. We have also discovered that the data is so important that there are industry-specific Watsons. A healthcare Watson might meet a security Watson.

Audience: There’s good and evil in society. Will you have to build in moral codes? Will you have an override function?

Kelly: We’re studying putting policy engines into Watson, instilling some of our values into the system. We debate this because we’re introducing human biases. Would it still be Watson, which is objective and fact-based?

Jackson: We’re doing work on establishing trust in machines—not to displace the imposition of human values but to understand how they might be expressed in computer systems.

Kelly: We have seven languages for Watson. It’s learning how we think through the language. That may affect how it thinks.

Audience: Would a cognitive system be applicable to the criminal justice system?

Kelly: It could bring all facts around a case and case law forward. It won’t be the judge. It would inform the decision maker. Bring the facts forward in an unbiased way.

Moore: Remember, Rolling Stone had to retract an article about the University of Virginia. Imagine a machine being able to draw information from many sources, in real time. It can be part of the editorial decision making—with the editors, the writers. It won’t remove bias but might bring in a broader perspective before something is published.

Kelly: In oncology, the specific chemo treatment is discussed by a tumor review board. They have bias. They don’t have all the information they need. Watson can bring it in. Watson can participate in the discussion and raise issues they haven’t thought of.

Audience: Is bandwidth becoming an issue in collaboration?

Kelly: The interconnect speed is the bottleneck. As a result, more compute is happening where the data is. We’re moving the processing, the Watsons, to the data. We’re sending the insights, not all the data.
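A minimal sketch of the “move the compute to the data” idea Kelly describes: a function runs next to where the readings live, and only a small summary travels over the interconnect. The names and data here are hypothetical stand-ins, not IBM’s architecture.

```python
# Minimal sketch: compute the "insight" next to the data and ship only a tiny
# summary over the network, instead of the raw stream. Names are hypothetical.
import json
import statistics

def summarize_locally(readings):
    """Runs where the data lives; returns a few numbers instead of every record."""
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
    }

raw_readings = [0.98, 1.02, 1.40, 0.99, 1.01]   # imagine millions of these on site
payload = json.dumps(summarize_locally(raw_readings))
print(f"sending {len(payload)} bytes instead of the full dataset: {payload}")
```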

Audience: In decision making and in innovation, there are a lot of impediments to progress from bias, power structures, and conventional wisdom. How do you help insights bubble up from the bottom?

Kelly: One of the things we could put in Watson is what we think is “ground truth,” or “human truth.” We could get answers more quickly, but we could get more bias and wrong answers. I think we have to minimize human bias and ground truth, and Watson will be a more powerful force.

Audience: What about individualized cognitive systems that understand you deeply and can advise you? Is that something you’re thinking about?

Kelly: Think about the aging population, and having personalized cognitive systems in your home that assist you and measure your cognitive abilities based on your speech and behavior. We’re working on this. It’s a very interesting individualized use of the technology.

Jackson: One can imagine having a system with facial recognition linked to a database that might tell you if a certain individual is an unsavory character. It would tell you to stay away.

The multimodal work we’re doing here plays to this. We’re using different sensory modes. This can help the systems respond to us individually.

——

John Kelly, Shirley Jackson, and Hui Su, director of the CISL, perform a virtual ribbon cutting on stage. Huge scissors, huge ribbon, huge screen behind them. Lots of virtual confetti on the screen.

——-

10:00 AM
John Kelly, Senior Vice President, Solutions Portfolio & Research, IBM

I remember when the Watson project got started. It was called Project Bluejay. In 2007, a group of researchers wanted to take on the problem of deep question answering.

I said don’t you understand we’re in a winter of artificial intelligence.

They told me they were going to build a system, call it Watson, and put it to the test on national television.

I said I don’t think so.

But they went on to create and succeed with the first-ever cognitive system, winning with Watson on Jeopardy.

The team didn’t set out to create a form of artificial intelligence that replicates what humans can do. It was about bringing insights to humans, not replacing them. This was a big thought.

I think we’re addressing the problem—the price of not knowing. Every industry is being swamped by information. With cognitive technologies we can unlock tremendous new insights and create new value.

You see it in the Internet of Things, security and in healthcare.

Incredible amounts of information, but most goes into storage.

Think about the cost of not knowing in Paris.

So much data lies dark, lies dormant. We don’t use it. Our machines don’t understand it. Think about the value we can draw from it.

Every industry is being swamped by data. Most isn’t being used. It’s lost or put in deep storage. The cost of not using that information is off the charts.

The original Watson was the beginning of the cognitive era. We don’t reprogram it. We feed better data. It learns over time based on a feedback loop.
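A toy sketch of that feedback loop, assuming a generic scikit-learn model as a stand-in: the program itself never changes, but corrected examples flow back into the training data and the model is retrained. This is illustrative only, not Watson’s actual mechanism.

```python
# A toy sketch of the feedback loop: the code never changes, but corrected
# examples flow back into the training set and the model is retrained.
# The model and data are stand-ins, not Watson's actual pipeline.
from sklearn.linear_model import LogisticRegression

X_train = [[0.1], [0.4], [0.6], [0.9]]
y_train = [0, 0, 1, 1]
model = LogisticRegression().fit(X_train, y_train)

# Users review the system's answers; their corrections become new training data.
feedback_X, feedback_y = [[0.55], [0.45]], [1, 0]
X_train += feedback_X
y_train += feedback_y

# Same program, better data: retrain on the augmented set.
model = LogisticRegression().fit(X_train, y_train)
print(model.predict([[0.5]]))   # the prediction now reflects the feedback
```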

The system was great at natural language. Now we have given Watson the ability to see images.

Now, what’s going on between the human and the system? We humans have some great skills. Machines have their own set of capabilities, not just for math. It’s the combination of human and machine that is the magic. Human plus machine will beat human or machine alone at almost every task.

When humans hit a roadblock and Watson comes in, we make progress. When Watson hits a roadblock, the humans come in and help push forward.

Cognitive systems learn at scale, reason with purpose and interact with humans in a natural way.

Let’s rethink how we do things.

Let’s rethink the call center. Don’t replace the people. Put Watson in the call center to help.

Rethink sports. We’re working with companies that put sensors in helmets and can detect whether the athlete has been injured.

Rethink insurance. It’s a complicated field. Today with a large insurer, if you go online and apply for auto insurance, Watson will help you through the process. It knows the rules and it knows you.

In oil and gas, there’s a huge cost of not knowing where to drill. Watson can help.

Rethink medical imaging. Watson can help radiologists view thousands of images and improve diagnosis and hopefully save lives.

Rethink genomic medicine. We hope to use Watson to make sense of the massive amount of genomic information—to spot individual mutations and individual cancers—to find better treatments.

In education, we have a chance to finally change the way we educate our children and adults around the world. We can understand how every individual is learning and adjust the teaching to the way they learn best. It’s individualized education.

Interacting with humans at scale will be a game changer. We started working on this soon after the original Watson competed on TV, in small experimental labs. Then we expanded to rooms with large screens and walls for visualizing data and interacting with Watson.

We were onto something, but we needed a bigger platform where people were working on immersive technologies and we could work with them at scale. That’s why we tied up with Rensselaer. We had to be here in the EMPAC.

I think the next stage of our partnership is cognitive computing at scale, and I think it will change the world.

——

09:30 AM
Shirley Ann Jackson, President, Rensselaer Polytechnic Institute

Four elements are required to change the world: Inspired people, programs, platforms and partnership.

Rensselaer’s Curtis R. Priem Experimental Media and Performing Arts Center is a great platform. Our partnership with IBM dates back decades to the IBM System/360. IBM Senior Vice President John Kelly, who got his PhD at Rensselaer, is an inspired person who has been our partner for decades as well.

We were the first and so far the only university in this country to get a physical Watson system on our campus.

At Rensselaer, we’re taking on major challenges for society, such as personalizing medicine, improving education, and addressing extreme events, such as climate change and terrorism.

Now, with IBM, we’re embarking on another major initiative, the Cognitive and Immersive Systems Lab (CISL). We’ll initially focus on situation rooms such as a cognitive boardroom, a cognitive design studio, a cognitive classroom, and a cognitive medical diagnostics room. These are collaborative environments.

We recognize the great power of cognitive computing to ingest vast amounts of diverse data types.

What makes CISL so ambitious? We’re not just looking at a single cognitive agent assisting a single person. Instead, we’ll enhance group cognition and group decision making. The problems of our day are too complex and too interconnected to be solved by individuals. That’s why we have to enable collaborations.

We anticipate a hierarchy of cognitive agents to assist every individual person, as well as an agent to assist the people as a group.

Cognitive agents will have to be sophisticated enough to be able to understand what’s being said and what’s not being said.

In the rooms we’re enhancing human cognition in multimodal spaces that take in visual and verbal cues and movements, and present information to people through multisensory pathways.

The rooms will recognize people by their faces and follow them as they move around the room.
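As an illustrative sketch of one building block such a room might start from, here is basic per-frame face detection with OpenCV. The camera source and parameters are assumptions; identifying specific people and tracking them across a room-scale, multi-camera space would require far more than this.

```python
# Illustrative sketch only: per-frame face detection with OpenCV, a low-level
# building block a room-scale recognition/tracking system might start from.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # any room camera feed (assumed device index)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:     # one bounding box per face found this frame
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("room camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```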

The rooms should be able to follow the flow of the meeting, picking up the mood, biases, and power dynamics, and even detecting when a decision has been made.

We’ll benefit from cognitive computing, high performance computing, and neuromorphic computing.

We’ll be able to glean insights from social networks. We’ll be able to model trust, which is essential in collaboration.

We’ve had breakthroughs in immersive environments, which will come into play. We’ve made advances in panoramic screen technology here at EMPAC. To support discussions among small groups we’ve developed Campfire, a virtual fire pit where information is displayed and conversation is encouraged.

CISL will bring together humans and machines so machines can handle enormous amounts of data and enhance the gifts that belong uniquely to humans: courage, creativity, insight and a desire to make a better world.

 

 

 
