Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real." — Pablo Picasso




Jul 24th, Tue

Dirk Helbing on A New Kind Of Socio-inspired Technology

The big unexplored continent in science is actually social science, so we really need to understand much better the principles that make our society and socially interactive systems work well. Our future information society will be characterized by computers that behave like humans in many respects. Ten years from now, we will have computers as powerful as our brain, and that will really fundamentally change society. Many professional jobs will be done much better by computers. How will that change society? How will that change business? What impacts does that have for science, actually?

There are two big global trends. One is big data. That means in the next ten years we’ll produce as much data as, or even more than, in the past 1,000 years. The other trend is hyperconnectivity. That means the networking of our world is going on at a rapid pace; we’re creating an Internet of things. So everyone is talking to everyone else, and everything becomes interdependent. What are the implications of that? (…)

But on the other hand, it turns out that we are, at the same time, creating highways for disaster spreading. We see many extreme events, we see problems such as the flash crash, or also the financial crisis. That is related to the fact that we have interconnected everything. In some sense, we have created unstable systems. We can show that many of the global trends that we are seeing at the moment, like increasing connectivity, increasing speed, and increasing complexity, are very good in the beginning, but (and this is kind of surprising) there is a turning point, and that turning point can turn into a tipping point that makes the systems shift in an unknown way.

Understanding our systems requires two things: social science and complexity science. Social science, because the computers of tomorrow are basically creating artificial social systems. Just take financial trading today; it’s done by the most powerful computers. These computers are creating a view of the environment, in this case the financial world. They’re making projections into the future. They’re communicating with each other. They really have many features of humans. And that basically establishes an artificial society, which means we may also face all the problems that we face in society if we don’t design these systems well. The flash crash is just one of those examples that shows that, if many of those components — the computers in this case — interact with each other, then some surprising effects can happen. And in that case, $600 billion actually evaporated within 20 minutes.

Of course, the markets recovered, but in some sense, as many solid stocks turned into penny stocks within minutes, it also changed the ownership structure of companies within just a few minutes. That is really a completely new dimension that arises when we build on these fully automated systems, and those social systems can show breakdowns of coordination, tragedies of the commons, crime, or cyber war; all these kinds of things will happen if we don’t design them right.

We really need to understand those systems, not just their components. It’s not good enough to have wonderful gadgets like smartphones and computers; each of them working fine in separation. Their interaction is creating a completely new world, and it is very important to recognize that it’s not just a gradual change of our world; there is a sudden transition in the behavior of those systems, as the coupling strength exceeds a certain threshold.

Traffic flow in a circle

I’d like to demonstrate that for a system that you can easily imagine: traffic flow in a circle. Now, if the density is high enough, then the following will happen: after some time, although every driver is trying hard to go at a reasonable speed, cars will be stopped by a so-called ‘phantom traffic jam.’ That means smooth traffic flow will break down, no matter how hard the drivers try to maintain speed. The question is: why is this happening? If you asked drivers, they would say, “hey, there was a stupid driver in front of me who didn’t know how to drive!” Everybody would say that. But it turns out it’s a systemic instability that is creating this problem.

That means a small variation in the speed is amplified over time, and the next driver has to brake a little bit harder in order to compensate for a delayed reaction. That creates a chain reaction among drivers, which finally stops traffic flow. These kinds of cascading effects are all over the place in the network systems that we have created, like power grids, for example, or our financial markets. It’s not always as harmless as in traffic jams. We’re just losing time in traffic jams, so people could say, okay, it’s not a very serious problem. But think about crowds, for example: once the density of the crowd crosses a certain threshold, what happens is a crowd disaster. That means people will die, although nobody wants to harm anybody else. Things will just go out of control, even though there might be hundreds or thousands of policemen or security forces trying to prevent these things from happening.
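This braking instability can be reproduced in a few lines. The sketch below uses the optimal-velocity car-following model (Bando et al.), a standard model of this phenomenon chosen here for illustration; the parameter values are assumptions, not taken from the talk. Cars start almost evenly spaced on a ring road, and a single tiny perturbation grows into a stop-and-go wave.

```python
# A 'phantom traffic jam' on a ring road, sketched with the
# optimal-velocity car-following model (Bando et al.).
# Parameter values are illustrative assumptions.
import math

N = 30           # number of cars on the circular road
L = 60.0         # road length, so the average headway is L / N = 2.0
a = 1.0          # drivers' adaptation rate (sluggish enough to be unstable)
dt = 0.1         # integration time step
steps = 5000     # simulate 500 time units

def optimal_velocity(headway):
    # Desired speed as a function of the gap to the car ahead.
    return math.tanh(headway - 2.0) + math.tanh(2.0)

# Start almost perfectly spaced, with one tiny perturbation to car 0.
x = [i * L / N for i in range(N)]
x[0] += 0.05
v = [optimal_velocity(L / N)] * N

def speed_spread(vs):
    return max(vs) - min(vs)

initial_spread = speed_spread(v)   # exactly 0: every car starts alike
for _ in range(steps):
    # Each driver accelerates towards the speed appropriate for the gap.
    acc = [a * (optimal_velocity((x[(i + 1) % N] - x[i]) % L) - v[i])
           for i in range(N)]
    for i in range(N):
        v[i] = max(0.0, v[i] + acc[i] * dt)
        x[i] = (x[i] + v[i] * dt) % L
final_spread = speed_spread(v)

print(f"spread of speeds: {initial_spread:.3f} -> {final_spread:.3f}")
```

Although every simulated driver only ever adapts towards a reasonable speed for its gap, the 0.05-unit perturbation is amplified into a stop-and-go wave: the spread of speeds grows from exactly zero to a large fraction of the free-flow speed, which is the systemic instability described here.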

This is really a surprising behavior of these kinds of strongly-networked systems. The question is, what implication does that have for other network systems that we have created, such as the financial system? There is evidence that the fact that now every bank is interconnected with every other bank has destabilized the system. That means that there is a systemic instability in place that makes the system hard, or even impossible, to control. We see that the big players, and also regulators, have great difficulty getting these systems under control.

That tells us that we need to change our perspective regarding these systems. Those complex systems are no longer characterized by the properties of their components, but by the outcome of the interactions between those components. As a result of those interactions, self-organization is going on in these systems. New emergent properties come up. They can be very surprising, actually, and that means we cannot understand those systems anymore based on what we see, which is the components.

We need to have new instruments and tools to understand these kinds of systems. Our intuition will not work here. And that is what we want to create: we want to come up with a new information platform for everybody that brings together big data with exa-scale computing, with people, and with crowdsourcing, basically connecting the intelligence of the brains of the world.

One component that is going to measure the state of the world is called the Planetary Nervous System. It will measure not just the physical state of the world and the environmental situation; it is also very important that we learn how to measure social capital, such as trust, solidarity, and punctuality, because these are very important for economic value generation, but also for social well-being.

Properties such as social capital, like trust, result from social network interactions. We’ve seen that one of the biggest problems of the financial crisis was this evaporation of trust. It has burned tens of trillions of dollars. If we learned how to stabilize trust, or build trust, that would be worth a lot of money, really. Today, however, we’re not considering the value of social capital. It can happen that we destroy it or exploit it, just as we’ve exploited and destroyed our environment. If we learn the value of social capital, we will start to protect it. We’ll also take it into account in our insurance policies, because today no insurance takes the value of social capital into account. It’s the material damage that we take into account, but not the social capital. That means, in some sense, we’re underinsured. We’re taking bigger risks than we should.

This is something that we want to learn: how to quantify the foundations of society, to quantify the social footprint. It means quantifying the implications of our decisions and actions.

The second component, the Living Earth Simulator, will be very important here, because it will look at what-if scenarios. It will take the big data generated by the Planetary Nervous System and allow us to look at different scenarios, to explore the various options that we have, and the potential side effects or cascading effects, and unexpected behaviors, because those interdependencies make our global systems really hard to understand. In many cases, we just overlook what would happen if we fix a problem over here: it might have unwanted side effects, and in many cases those happen in other parts of our world.

We are using supercomputers today in all areas of our development. When we are developing a car, a plane, or medical treatments, supercomputers are being used, also in the financial world. But we don’t have a kind of political or business ‘flight simulator’ that helps us to explore different opportunities. I think this is what we can create as our understanding of society progresses. We now have much better ideas of how social coordination comes about, and what the preconditions for cooperation are. What are the conditions that create conflict, or crime, or war, or epidemic spreading, in the good and the bad sense?

We’re using, of course, viral marketing today in order to increase the success of our products. But at the same time, we are also suffering from the quick spreading of emerging diseases, of computer viruses, Trojan horses, and so on. We need to understand these kinds of phenomena, and with the data and the computer power that is coming up, it comes within reach to actually get a much better picture of these things.

The third component will be the Global Participatory Platform [pdf]. That basically makes those other tools available for everybody: for business leaders, for political decision-makers, and for citizens. We want to create an open data and modeling platform that creates a new information ecosystem that allows you to create new businesses, to come up with large-scale cooperation much more easily, and to lower the barriers for social, political and economic participation.

So these are the three big elements. Furthermore, we’ll build exploratories of society, of the economy, the environment, and technology, in order to be able to anticipate possible crises, but also to see opportunities that are coming up. Those exploratories will bring these three elements together: the measurement component, the computer simulation component, and the participation, the interactiveness.

In some sense, we’re going to create virtual worlds that may look like our real world, copies of our world that allow us to explore policies in advance, or certain kinds of planning in advance. Just to make it a little bit more concrete, we could, for example, check out a new airport or a new city quarter before it’s built. Today we have these architectural plans and competitions, and then the most beautiful design will win. But then, in practice, it can happen that it doesn’t work so well. People have to stand in line in queues, or obstruct each other. Many things may not work out as the architect imagined.

What if we basically populated these architectural plans with real people? They could check it out, live there for some months, and see how much they like it. Maybe even change the design. That means the people who would use these facilities and live in these new quarters of the city could actually participate in the design of the city. In the same sense, you can scale that up. Just imagine Google Earth or Google Street View filled with people, and have something like a serious kind of Second Life. Then we could have not just one history; we could check out many possible futures by actually trying out different financial architectures, or different decision rules, or different intellectual property rights, and see what happens.

We could have even different virtual planets, with different laws and different cultures and different kinds of societies. And you could choose the planet that you like most. So in some sense, now a new age is opening up with almost unlimited resources. We’re, of course, still living in a material world, in which we have a lot of restrictions, because resources are limited. They’re scarce and there’s a lot of competition for these scarce resources. But information can be multiplied as much as you like. Of course, there is some cost, and also some energy needed for that, but it’s relatively low cost, actually. So we can create really almost infinite new possibilities for creativity, for productivity, for interaction. And it is extremely interesting that we have a completely new world coming up here, absolutely new opportunities that need to be checked out.

But now the question is: how will it all work? Or how would you make it work? Because the information systems that we have created are even more complex than our financial system. We know the financial system is extremely difficult to regulate and to control. How would you want to control an information system of this complexity? I think that cannot be done top-down. We are seeing now a trend that complex systems are run in a more and more decentralized way. We’re learning somehow to use self-organization principles in order to run these kinds of systems. We have seen that in the Internet, and we are seeing it for smart grids, but also for traffic control.

I have been working myself on these new ways of self-control. It’s very interesting. Classically, one has tried to optimize traffic flow. It’s so demanding that even our fastest supercomputers can’t do that in a strict sense, in real time. That means one needs to make simplifications. But in principle, what one is trying to do is to impose an optimal traffic light control top-down on the city. The supercomputer is supposed to know what is best for all the cars, and that is imposed on the system.

We have developed a different approach where we said: given that there is a large degree of variability in the system, the most important aspect is to have a flexible adaptation to the actual traffic conditions. We came up with a system where traffic flows control the traffic lights. It turns out this makes much better use of scarce resources, such as space and time. It works better for cars, it works better for public transport and for pedestrians and bikers, and it’s good for the environment as well.                 
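The contrast between the two approaches can be sketched in a toy single-intersection simulation: a fixed-time plan that splits green time equally, versus a queue-driven rule in which the waiting traffic itself triggers the switch, subject to a minimum green time. This illustrates the principle only; the arrival rates, timings, and switching rule are assumptions, not the actual controller developed by Helbing's group.

```python
# Toy comparison: fixed-time traffic lights vs. a "flow-controlled"
# rule where queues trigger the switch. All parameters are
# illustrative assumptions.
import random

def simulate(adaptive, steps=3600, seed=7):
    rng = random.Random(seed)
    queues = [0, 0]              # vehicles waiting on approaches A and B
    green = 0                    # index of the approach with green
    green_age = 0                # seconds the current green has lasted
    total_wait = 0               # accumulated vehicle-seconds of waiting
    MIN_GREEN, HALF_CYCLE = 10, 30
    arrival_rate = (0.30, 0.10)  # asymmetric demand: A is much busier
    for _ in range(steps):
        for i in (0, 1):         # random arrivals each second
            if rng.random() < arrival_rate[i]:
                queues[i] += 1
        if adaptive:
            # Queue-driven: switch when the red approach's queue
            # dominates, but respect a minimum green time.
            if green_age >= MIN_GREEN and queues[1 - green] > queues[green]:
                green, green_age = 1 - green, 0
        else:
            # Fixed plan: equal green shares regardless of demand.
            if green_age >= HALF_CYCLE:
                green, green_age = 1 - green, 0
        if queues[green] > 0:    # one vehicle clears per second of green
            queues[green] -= 1
        green_age += 1
        total_wait += queues[0] + queues[1]
    return total_wait

fixed_wait = simulate(adaptive=False)
adaptive_wait = simulate(adaptive=True)
print("vehicle-seconds of waiting, fixed plan:   ", fixed_wait)
print("vehicle-seconds of waiting, queue-driven: ", adaptive_wait)
```

Because demand is asymmetric and fluctuating, the rigid 50/50 plan wastes green time on the quiet approach while queues build on the busy one; letting the queues themselves drive the switching uses the scarce green time where it is needed, which is the flexible-adaptation point made above.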

The age of social innovation

There’s a new kind of socio-inspired technology coming up, now. Society has many wonderful self-organization mechanisms that we can learn from, such as trust, reputation, culture. If we can learn how to implement that in our technological system, that is worth a lot of money; billions of dollars, actually. We think this is the next step after bio-inspired technology.

The next big step is to focus on society. We’ve had an age of physics; we’re now in an age of biology. I think we are entering the age of social innovation as we learn to make sense of this even bigger complexity of society. It’s like a new continent to discover. It’s really fascinating what now becomes understandable with the availability of Big Data about human activity patterns, and it will open a door to a new future.

What will be very important in order to make sense of the complexity of our information society is to overcome the disciplinary silos of science; to think out of the box. Classically we had social sciences, we had economics, we had physics and biology and ecology, and computer science and so on. Now, our project is trying to bring those different fields together, because we’re deeply convinced that without this integration of different scientific perspectives, we can no longer make sense of these hyper-connected systems that we have created.

For example, computer science requires complexity science and social science to understand those systems that have been created and will be created. Why is this? Because the dense networking and the complex interactions between the components create self-organization and emergent phenomena in those systems. The flash crash is just one example that shows that unexpected things can happen. We know that from many systems.

Complexity theory is very important here, but also social science. And why is that? Because the components of these information communication systems are becoming more and more human-like. They’re communicating with each other. They’re making a picture of the outside world. They’re projecting expectations into the future, and they are taking autonomous decisions. That means if those computers interact with each other, it’s creating an artificial social system in some sense.                 

In the same way, social science will need complexity science and computer science. Social science needs the data that computer science and information communication technology can provide. Now, and even more in the future, those data traces about human activities allow us eventually to detect patterns and kind of laws of human behavior. It will be only possible through the collaboration with computer science to get those data, and to make sense of what is happening actually in society. I don’t need to mention that obviously there are complex dynamics going on in society; that means complexity science is needed for social science as well.

In the same sense, we could say complexity science needs social science and computer science to become practical: to go a step beyond talking about butterfly effects and chaos and turbulence, and to make sure that the thinking of complexity science will pervade our thinking in the natural, engineering, and social sciences and allow us to understand the real problems of our world. That is kind of the essence: we need to bring these different scientific fields together. We have actually succeeded in building up these integrated communities in many countries all over the world, ready to go as soon as money becomes available for that.

Big Data is not a solution per se. Even the most powerful machine-learning algorithm will not be sufficient to make sense of our world, to understand the principles according to which our world is working. This is important to recognize. The great challenge is to marry data with theories, with models. Only then will we be able to make sense of the useful bits of data. It’s like finding a needle in a haystack: the more data you have, the more difficult it may be, to a certain extent, to find this needle. And there is this danger of over-fitting, of being distracted from the important details. We are certainly already in an age where we’re flooded with information, and our attention cannot actually process all that information. That means there is a danger that this undermines our wisdom, if our attention is attracted by the wrong details of information. So we are confronted with the problem of finding the right institutions, tools, and instruments for decision-making.

The Living Earth Simulator will basically take the data that is gathered by the Internet, by search requests, and by sensor networks, and feed it into big computer simulations that are based on models of social, economic, and technological behavior. In this way, we’ll be able to look at what-if scenarios. We hope to get a better understanding, for example, of financial systems and some answers to controversial questions such as: how much leverage is good? Under what conditions is ‘naked short-selling’ beneficial? When does it destabilize markets? To what extent is high-frequency trading good, or can it also have side effects? All these kinds of questions are difficult to answer. Or how to deal best with the situation in Europe, where we have trouble, obviously, in Greece, but also contagious effects on other countries and on the rest of the financial system. It would be very good to have the models and the data that allow us actually to simulate these kinds of scenarios and to take better-informed decisions. (…)

The idea is to have an open platform to create a data and model commons that everybody can contribute to, so people could upload data and models, and others could use that. People would also judge the quality of the data and models and rate them according to their criteria. And we also point out the criteria according to which they’re doing the rating. But in principle, everybody can contribute and everybody can use it. (…)                            

We also have much better theories that allow us to make sense of those data. We’re entering an age where we can understand society and the economy much better, namely as complex self-organizing systems.

It will be important to guide us into the future, because we are creating very powerful systems. The information society will transform our society fundamentally, and we shouldn’t just let it happen. We want to understand how that will change our society, what the different paths are that our society may take, and decide on the one that we want it to take. For that, we need to have a much better understanding.

Now a lot of social activity data are becoming available through Facebook and Twitter and Google search requests and so on. This is, of course, a huge opportunity for business. Businesses are talking about the new oil, about personal data as a new asset class. There’s something like a gold rush going on. That also, of course, holds huge opportunities for science: eventually we can make sense of complex systems such as our society. There are different perspectives on this. They range from people who think that information communication technologies will eventually create a God’s-eye view, systems that make sense of all human activities and the interactions of people, to others who are afraid of a Big Brother emerging.

The question is how to handle that situation. Some people say we don’t need privacy in society; society is undergoing a transformation, and privacy is no longer needed. As a social scientist, I don’t share this point of view, because public and private are two sides of the same coin: neither can exist without the other. It is very important, for a society to work, to have social diversity. Today, we have learned to appreciate biodiversity, and in the same way we need to think about social diversity, because it’s a motor of innovation. It’s also an important factor for societal resilience. The question now is how all those data that we are creating, and also recommender systems and personalized services, are going to impact people’s decision-making behavior, and society overall.

This is what we need to look at now. How is people’s behavior changing through these kinds of data? How do people change their behavior when they feel they’re being observed? Europe is quite sensitive about privacy. The project we are working on is actually trying to find a balance between the interests in Big Data of companies, governments, and individuals. Basically, we want to develop technologies that allow us to find this balance, to make sure that all three perspectives are actually taken into account: that you can do big business, but at the same time the individual’s privacy is respected, and that individuals have more control over their own data, know what is happening with them, and have influence on what is happening with them. (…)

In some sense, we want to create a new data and model commons, a new kind of language, a new public good that allows people to do new things. (…)

My feeling is that actually business will be made on top of this sea of data that’s being created. At the moment data is kind of the valuable resource, right? But in the future, it will probably be a cheap resource, or even a free resource to a certain extent, if we learn how to deal with openness of data. The expensive thing will be what we do with the data. That means the algorithms, the models, and theories that allow us to make sense of the data.”

Dirk Helbing, physicist and professor of sociology, in particular of modelling and simulation, at ETH Zurich – Swiss Federal Institute of Technology, A New Kind Of Socio-inspired Technology, Edge Conversation, June 19, 2012. (Illustration: WSF)

See also:

☞ Dirk Helbing, New science and technology to understand and manage our complex world in a more sustainable and resilient way (pdf) (presentation), ETH Zurich
Why does nature so consistently organize itself into hierarchies? Living Cells Show How to Fix the Financial System
Geoffrey West on Why Cities Keep Growing, Corporations and People Always Die, and Life Gets Faster
The Difference Between Online Knowledge and Truly Open Knowledge. In the era of the Internet facts are not bricks but networks
Networks tag on Lapidarium notes

Apr 26th, Thu

Jason Silva on singularity, synthetic biology and a desire to transcend human boundaries


"That’s what we do with all of our art. A beautiful cathedral, a beautiful painting, a beautiful song — all of those are ecstatic visions held in stasis; in some sense the artist is saying “here is a glimpse I had of something ephemeral and fleeting and magical, and I’m doing my best to instantiate that into stone, into paint, into stasis.” And that’s what human beings have always done, we try to capture these experiences before they go dim, we try to make sure that what we glimpse doesn’t fade away before we get hungry or sleepy later. (…)

We want to transcend our biological limitations. We don’t want biology or entropy to interrupt the ecstasy of consciousness. Consciousness, when it’s unburdened by the body, is something that’s ecstatic; we use the mind to watch the mind, and that’s the meta-nature of our consciousness, we know that we know that we know, and that’s such a delicious feeling, but when it’s unburdened by biology and entropy it becomes more than delicious; it becomes magical. I mean, think of the unburdening of the ego that takes place when we watch a film; we sit in a dark room, it’s sort of a modern church, we turn out the lights and an illumination beams out from behind us creating these ecstatic visions. We lose ourselves in the story, we experience a genuine catharsis, the virtual becomes real — it’s total transcendence, right? (…)

This haunting idea of the passing of time, of the slipping away of the treasured moments of our lives, became a catalyst for my thinking a lot about mortality. This sense that the moment is going to end, the night will be over, and that we’re all on this moving walkway headed towards death; I wanted a diversion from that reality. In Ernest Becker's book The Denial of Death, he talks about how the neurotic human condition is not a product of our sexual repression, but rather our repression in the face of death anxiety. We have this urgent knot in our stomach because we’re keenly aware that we’re mortal, and so we try to find these diversions so that we don’t think about it — and these have manifested into the religious impulse, the romantic impulse, and the creative impulse.

As we increasingly become sophisticated, cosmopolitan people, the religious impulse is less relevant. The romantic impulse has served us well, particularly in popular culture, because that’s the impulse that allows us to turn our lovers into deities; we say things like “she’s like salvation, she’s like the wind,” and we end up worshipping our lovers. We invest in this notion that to be loved by someone is to be saved by someone. But ultimately no relationship can bear the burden of godhood; our lovers reveal their clay feet and their frailties and they come back down to the world of biology and entropy. 

So then we look for salvation in the creative impulse, this drive to create transcendent art, or to participate in aesthetic arrest. We make beautiful architecture, or beautiful films that transport us to this lair where we’re like gods outside of time. But it’s still temporal. The arts do achieve that effect, I think, and so do technologies to the extent that they’re extensions of the human mind, extensions of our human longing. In a way, that is the first pathway to being immortal gods. Particularly with technologies like the space shuttle, which make us into gods in the sense that they let us hover over the earth looking down on it. But then we’re not gods, because we still age and we die.

But even if you see the singularity only as a metaphor, you have to admit it’s a pretty wonderful metaphor, because human nature, if nothing else, consists of this desire to transcend our boundaries — the entire history of man from hunter-gatherer to technologist to astronaut is this story of expanding and transcending our boundaries using our tools. And so whether the metaphor works for you or not, that’s a wonderful way to live your life, to wake up every day and say, “even if I am going to die I am going to transcend my human limitations.” And then if you make it literal, if you drop this pretense that it’s a metaphor, you notice that we actually have doubled our lifespan, we really have improved the quality of life across the world, we really have created magical devices that allow us to send our thoughts across space at nearly the speed of light. We really are on the cusp of reprogramming our biology like we program computers.

All of a sudden this metaphor of the singularity spills over into the realm of the possible, and it makes it that much more intoxicating; it’s like going from two dimensions to three dimensions, or black and white to color. It just keeps going and going, and it never seems to hit the wall that other ideas hit, where you have to stop and say to yourself “stop dreaming.” Here you can just kind of keep dreaming, you can keep making these extrapolations of Moore’s Law, and say “yeah, we went from building-sized supercomputers to the iPhone, and in forty-five years it will be the size of a blood cell.” That’s happening, and there’s no reason to think it’s going to stop.
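The extrapolation in that last sentence is easy to sanity-check. A minimal back-of-the-envelope calculation, assuming the classic Moore's-law cadence of one doubling roughly every two years (the exact period is an assumption):

```python
# Back-of-the-envelope check of the forty-five-year extrapolation,
# assuming one doubling roughly every two years.
doubling_period_years = 2.0
years = 45
doublings = years / doubling_period_years   # 22.5 doublings
factor = 2 ** doublings
print(f"{doublings:.1f} doublings -> about {factor:,.0f}x the density")
```

That is roughly a six-million-fold gain in compute per unit volume. Whether that literally shrinks a phone's capability to the size of a blood cell depends on what can actually be miniaturized, but it shows the scale of shrinkage the claim leans on.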

Q: Going through your videos, I noticed that one vision of the singularity that you keep returning to is this idea of “substrate-independent minds.” Can you explain what a substrate independent mind is, and why it makes for such a compelling vision of the future?

Jason Silva: That has to do with what’s called STEM compression, which is this notion that all technologies become compressed in terms of space, time, energy and matter (STEM) as they evolve. Our brain is a great example of this; it’s got this dizzying level of complexity for such a small space, but the brain isn’t optimal. The optimal scenario would be to have brain-level complexity, or even higher-level complexity, in something that’s the size of a cell. If we radically upgrade our bodies with biotech, we might find that in addition to augmenting our biological capabilities, we’re also going to be replacing more of our biology with non-biological components, so that things are backed up and decentralized and not subject to entropy. More and more of the data processing that makes up our consciousness is going to be non-biological, and eventually we might be able to discard biology altogether, because we’ll have finally invented a computational substrate that supports the human mind.

At that point, if we’re doing computing at the nano scale, or the femto scale, which is even smaller, you could see extraordinary things. What if we could store all of the computing capacity of the world’s computer networks in something that operates at the femto scale? What if we could have thinking, dreaming, conscious minds operating at the femto scale? That would be a substrate-independent mind.

You can even go beyond that. John Smart has this really interesting idea he calls the Transcension Hypothesis. It’s the idea that all civilizations hit a technological singularity, after which they stop expanding outwards, and instead become subject to STEM compression that pushes them inward into denser and denser computational states until eventually we disappear out of the visible universe, and we enter into a black-hole-like condition. So you’ve got digital minds exponentially more powerful than the ones we use today, operating in the computational substrate, at the femto scale, and they’re compressing further and further into a black hole state, because a black hole is the most efficient computational substrate that physics has ever described. I’m not a physicist, but I have read physicists who say that black holes are the ultimate computers, and that’s why the whole STEM compression idea is so interesting, especially with substrate-independent minds: minds that can hop back and forth between different organizational structures of matter. (…)

With technology, we’ve been doing the same thing we used to with religion, which is to dream of a better way to exist, but technology actually gives you real ways to extend your thoughts and your vision. (…)



The mind is always participating in these feedback loops with the spaces it resides in; whatever is around us is a mirror that we’re holding up to ourselves, because everything we’re thinking about we’re creating a model of in our heads. So when you’re in constrained spaces you’re having constrained thoughts, and when you’re in vast spaces you have vast thoughts. So when you get to sit and contemplate actual outer space, solar systems, and galaxies, and superclusters—think about how much that expands your inner world. That’s why we get off on space.

I also get off on synthetic biology, because I love the metaphors that exist between technology and biology: the idea that we may be able to reprogram the operating system, or upgrade the software of our biology. It’s a great way to help people understand what’s possible with biology, because people already understand the power we have over the digital world—we’re like gods in cyberspace, we can make anything come into being. When the software of biology is subject to that very same power, we’re going to be able to do those same things in the realm of living things. There’s this Freeman Dyson line that I have quoted a million times in my videos, to the point where people are actually calling me out about it, but the reason I keep coming back to it is that it’s so emblematic of my awe in thinking about this stuff—he says that "in the future, a new generation of artists will be writing genomes as fluently as Blake and Byron wrote verses." It’s a really well-placed analogy, because the alphabet is a technology; you can use it to engender alphabetic rapture with literature and poetry. Guys like Shakespeare and Blake and Byron were technologists who used the alphabet to engineer wonderful things in the world. With biology, new generations of artists will be able to perform the same miracles that Shakespeare and those guys did with words, only they’ll be doing it with genes.

Q: You romanticize technology in some really interesting ways; in one of your videos you say that if you could watch the last century in time lapse you would see ideas spilling out of the human mind and into the physical universe. Do you expect that interface between the mind and the physical to become even more lubricated as time passes? Or are there limits, physical or otherwise, that we’re eventually going to run up against?

Jason Silva: It’s hard to say, because as our tools become more powerful they shrink the buffer time between our dreams and our creations. Today we still have this huge lag time between thinking and creation. We think of something, and then we have to go get the stuff for it, and then we have to build it—it’s not like we can render it at the speed of thought. But eventually it will get to the point where it will be like that scene in Inception where he says that we can create and perceive our world at the same time. Because, again, if you look at human progress in time lapse, it is like that scene in Inception. People thought “airplane, aviation, jet engine” and then those things were in the world. If you look at the assembly line of an airplane in time lapse it actually looks self-organizing; you don’t see all of these agencies building it, instead it’s just being formed. And when you see the earth as the biosphere, as this huge integrated system, then you see this stuff just forming over time, just popping into existence. There’s this process of intention, imagination and instantiation, and the buffer time between each of those steps is getting smaller and smaller. (…)”

Jason Silva, Venezuelan-American television personality, filmmaker, gonzo journalist and founding producer/host for Current TV, A Timothy Leary for the Viral Video Age, The Atlantic, Apr 12, 2012.

Turning Into Gods - ‘Concept Teaser’ by Jason Silva

"Turning Into Gods is a new feature length documentary exploring mankind’s journey to ‘play jazz with the universe’… it is a story of our ultimate potential, the reach of our intelligence, the scope of our scientific and engineering abilities and the transcendent quality of our heroic and noble calling.

Thinking, feeling, striving, man is what Pierre Teilhard de Chardin called “the ascending arrow of the great biological synthesis.”… today we walk a tightrope between ape and Nietzsche’s Overman… how will we make it through, and what is the texture and color of our next refined and designed evolutionary leap? (…)

"We’re on the cusp of a bio-tech/nanotech/artificial-intelligence revolution that will open up new worlds of exploration. And we should open our minds to the limitless, mind-boggling possibilities.”

Why We Could All Use a Heavy Dose of Techno-optimism, Vanity Fair, May 7, 2010.

See also:

‘To understand is to perceive patterns’, Lapidarium notes
Wildcat and Jason Silva on immortality
☞ Jason Silva, The beginning of infinity (video)
Kevin Kelly on information, evolution and technology: ‘The essence of life is not energy but ideas’, Lapidarium notes
Kevin Kelly on Why the Impossible Happens More Often
Waking Life ☞ animated film focuses on the nature of dreams, consciousness, and existentialism. Eamonn Healy speaks about telescopic evolution and the future of humanity
Mark Changizi on Humans, Version 3.0.
Science historian George Dyson: Unravelling the digital code
Technology tag on Lapidarium notes

Apr
25th
Wed
permalink

Waking Life animated film focuses on the nature of dreams, consciousness, and existentialism



Waking Life is an American animated film (rotoscoped over live-action footage), directed by Richard Linklater and released in 2001. The entire film was shot using digital video, and then a team of artists used computers to draw stylized lines and colors over each frame.

The film focuses on the nature of dreams, consciousness, and existentialism. The title is a reference to philosopher George Santayana's maxim: “Sanity is a madness put to good uses; waking life is a dream controlled.”

Waking Life is about an unnamed young man in a persistent dream-like state that eventually progresses to lucidity. He initially observes and later participates in philosophical discussions of issues such as reality, free will, the relationship of the subject with others, and the meaning of life. Along the way the film touches on other topics including existentialism, situationist politics, posthumanity, the film theory of André Bazin, and lucid dreaming itself. By the end, the protagonist feels trapped by his perpetual dream, broken up only by unending false awakenings. His final conversation with a dream character reveals that reality may be only a single instant which the individual consciousness interprets falsely as time (and, thus, life) until a level of understanding is achieved that may allow the individual to break free from the illusion.

Ethan Hawke and Julie Delpy reprise their characters from Before Sunrise in one scene. (Wiki)

Eamonn Healy speaks about telescopic evolution and the future of humanity

We won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). (…) The paradigm shift rate (i.e., the overall rate of technical progress) is currently doubling (approximately) every decade; that is, paradigm shift times are halving every decade (and the rate of acceleration is itself growing exponentially).

So, the technological progress in the twenty-first century will be equivalent to what would require (in the linear view) on the order of 200 centuries. In contrast, the twentieth century saw only about 25 years of progress (again at today’s rate of progress) since we have been speeding up to current rates. So the twenty-first century will see almost a thousand times greater technological change than its predecessor.

Ray Kurzweil, American author, scientist, inventor and futurist, The Law of Accelerating Returns, KurzweilAI, March 7, 2001.
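[Editorial aside, not part of the quoted text: Kurzweil’s arithmetic can be sketched in a few lines. The decade-by-decade discrete doubling below is a simplification of his continuous model, for illustration only.]

```python
# A sketch of the "accelerating returns" arithmetic quoted above
# (an illustration; the discrete decade-by-decade doubling is a
# simplification of Kurzweil's continuous-growth model).

def progress_in_years(decades: int) -> int:
    """Total progress, measured in years at the year-2000 rate,
    if the rate of progress doubles every decade (starting at 1x)."""
    return sum(10 * 2 ** k for k in range(decades))

print(progress_in_years(10))  # ten decades -> 10230 "years" of progress
```

The discrete sum gives on the order of 10,000 years-equivalent for the 21st century; Kurzweil’s continuous version lands nearer his quoted 20,000 years, but either way the point is the same order-of-magnitude explosion over a linear view.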

"If we’re looking at the highlights of human development, you have to look at the evolution of the organism and then at the development of its interaction with the environment. Evolution of the organism will begin with the evolution of life perceived through the hominid coming to the evolution of mankind. Neanderthal and Cro-Magnon man. Now, interestingly, what you’re looking at here are three strings: biological, anthropological — development of the cities — and cultural, which is human expression.

Now, what you’ve seen here is the evolution of populations, not so much the evolution of individuals. And in addition, if you look at the time scales that are involved here — two billion years for life, six million years for the hominid, 100,000 years for mankind as we know it — you’re beginning to see the telescoping nature of the evolutionary paradigm. And then when you get to agricultural, when you get to scientific revolution and industrial revolution, you’re looking at 10,000 years, 400 years, 150 years. You’re seeing a further telescoping of this evolutionary time. What that means is that as we go through the new evolution, it’s gonna telescope to the point we should be able to see it manifest itself within our lifetime, within this generation.

The new evolution stems from information, and it stems from two types of information: digital and analog. The digital is artificial intelligence. The analog results from molecular biology, the cloning of the organism. And you knit the two together with neurobiology. Before on the old evolutionary paradigm, one would die and the other would grow and dominate. But under the new paradigm, they would exist as a mutually supportive, noncompetitive grouping. Okay, independent from the external.

And what is interesting here is that evolution now becomes an individually centered process, emanating from the needs and desires of the individual, and not an external process, a passive process where the individual is just at the whim of the collective. So, you produce a neo-human, okay, with a new individuality and a new consciousness. But that’s only the beginning of the evolutionary cycle because as the next cycle proceeds, the input is now this new intelligence. As intelligence piles on intelligence, as ability piles on ability, the speed changes. Until what? Until we reach a crescendo that, in a way, could be imagined as an enormous instantaneous fulfillment of human and neo-human potential. It could be something totally different. It could be the amplification of the individual, the multiplication of individual existences. Parallel existences now with the individual no longer restricted by time and space.

And the manifestations of this neo-human-type evolution, manifestations could be dramatically counter-intuitive. That’s the interesting part. The old evolution is cold. It’s sterile. It’s efficient, okay? And its manifestations are those social adaptations. We’re talking about parasitism, dominance, morality, okay? Uh, war, predation, these would be subject to de-emphasis. These will be subject to de-evolution. The new evolutionary paradigm will give us the human traits of truth, of loyalty, of justice, of freedom. These will be the manifestations of the new evolution. And that is what we would hope to see from this. That would be nice.”

Eamonn Healy, professor of chemistry at St. Edward’s University in Austin, Texas, where his research focuses on the design of structure-activity probes to elucidate enzymatic activity. He appears in Richard Linklater's 2001 film Waking Life discussing concepts similar to a technological singularity and explaining “telescopic evolution.”, Eamonn Healy speaks about telescopic evolution and the future of humanity from Brandon Sergent, Transcript

See also:

Jason Silva on singularity, synthetic biology and a desire to transcend human boundaries

Oct
24th
Mon
permalink

Kevin Kelly on information, evolution and technology: ‘The essence of life is not energy but ideas’

                   

"Technology’s dominance ultimately stems not from its birth in human minds but from its origin in the same self-organization that brought galaxies, planets, life, and minds into existence. It is part of a great asymmetrical arc that begins at the big bang and extends into ever more abstract and immaterial forms over time. The arc is the slow yet irreversible liberation from the ancient imperative of matter and energy.”

Kevin Kelly, What Technology Wants, New York: Viking, The Penguin Group, 2010

"The best way to understand the manufactured world is not to see it as a work of human imagination only, but to see it as an extension of the biological world. Most of us walk around with a strict mental dichotomy between the natural world of genes and the artificial world of concrete and code. When we actually look at how evolution works, the distinction begins to break down. The defining force behind life is not energy but information. Evolution is a process of information transmission, and so is technology, which is why it too reflects a biological transcendence.

Q: You have described technology as the “seventh kingdom of life” – which is a very ontological description – and as “the accumulation of ideas” – which is an epistemological description. Are the two converging?

Kelly: I take a very computational view of life and evolution. If you look at the origins of life and the forces of evolution, they are very intangible. Life is built on bits, on ideas, on information, on immaterial things. The technology sphere we have made – which is what I call the Technium – consists of information as well. We can take a number of atoms and arrange them in such a way as to maximize their usefulness – for example by creating a cell phone. When we think about who we are, we are always talking about information, about knowledge, about processes that increase the complexity of things. (…)

I am a critic of those who say that the internet has become a sentient and living being. But while the internet is not conscious like an organism, it exhibits some lifelike qualities. Life is not a binary thing that is either there or not there. It is a continuum between semi-living things like viruses and very living things like us. What we are seeing right now is an increased “lifeness” in technology as we move across the continuum. As things become more complex, they become more lifelike. (…)

One of the problems for biologists right now is to distinguish between random and organized processes. If we want to think coherently about the relationship between biology and technology, we need good working definitions to outline the edges of the spectrum of life that we are investigating. One of the ways to do that is to create artificial life and then debate whether we have crossed a threshold. I think we are beginning to see actual evolution in technology because the similarities to natural evolution are so large that it has become hard to ignore them. (…)

I think that the essence of life is natural and subject to investigation by reason. Quantum physics is science, but it is so far removed from our normal experience that the investigation becomes increasingly difficult. Not everyone might understand it, but collectively we can. One of the reasons we want to build artificial intelligence is to supplement our human intelligence, because we may require other kinds of thinking to understand these mysteries. Technology is a way to manufacture types of thinking that don’t yet exist. (…)

Innovation always has unintended consequences. Every new invention creates new solutions, but it also creates almost as many new problems. I tend to think that technology is not really powerful unless it can be powerfully abused. The internet is a great example of that: It will be abused, there will be very significant negative consequences. Even the expansion of choices itself has unintended consequences. Barry Schwartz calls it the “paradox of choice”: Humans have evolved with a limited capacity for making decisions. We can be paralyzed by choice! (…)

Most of the problems today have been generated by technology, and most future problems will be generated by technology as well. I am so technocentric that I say: The solution to technological problems is more technology. Here’s a tangible example: If I throw around some really bad ideas in this interview, you won’t counsel me to stop thinking. You will encourage me to think more and come up with better ideas. Technology is a way of thinking. The proper response to bad technology is not less, but more and better technology. (…)

I always think of technology as a child: You have to work with it, you have to find the right role and keep it away from bad influences. If you tell your child, “I will disown you if you become a lawyer”, that will almost guarantee that they become a lawyer. Every technology can be weaponized. But the way to stop that is not prohibition but an embrace of that technology to steer its future development. (…)

I am not a utopian who believes that technology will solve our problems. I am a protopian, I believe in gradual progress. And I am convinced that much of that progress is happening outside of our control. In nature, new species fill niches that can be occupied and inhabited. And sometimes, these niches are created by previous developments. We are not really in control of those processes. The same is true for innovation: There is an innate bias in the Technium that makes certain processes inevitable. (…)

I use the term the same way you would describe adolescence as the inevitable step between childhood and adulthood. We are destined by the physics and chemistry of matter. If we looked at a hundred planets in the universe that were inhabited by intelligent life, I bet that we would eventually see something like the internet on almost all of them. But can we find exceptions? Probably. (…)

Q: Is innovation a process that can continue indefinitely? Or does the infinite possibility space eventually run against the constraints of a world with finite resources and finite energy?

Kelly: I don’t believe in omega points. One of the remarkable things about life is that evolution does not stop. It always finds new paths forwards and new niches to occupy. As I said before, the essence of life is not energy but ideas. If there are limits to how many ideas can exist within a brain or within a system, we are still very far away from those limits. (…)

Long before we reach a saturation point, we will evolve into something else. We invented our humanity, and we can reinvent ourselves with genetic engineering or other innovations. We might even fork into a species that embraces speedy development and a species that wants no genetic engineering.

Q: You are advocating a very proactive approach to issues like genetic enhancements and human-technological forms of symbiosis, yet you also stress the great potential for abuse, for ethical problems and for unintended consequences.

Kelly: Yes, we are steamrolling ahead. The net gain will slightly outweigh the negative aspects. That is all we need: A slightly greater range of choices and opportunities every year equals progress. (…)

For the past ten thousand years, technological progress has on average enabled our opportunities to expand. The easiest way to demonstrate the positive arc of progress is to look at the number of people today who would want to live in an earlier time. Any of us could sell all material possessions within days and live like a caveman. I have written on the Amish people, and I have lived with native tribes, so I understand the attractions of that lifestyle. It’s a very supportive and grounded reality. But the cost of that experience is the surrender of all the other choices and opportunities we now enjoy. (…)

My point about technology is that every person has a different set of talents and abilities. The purpose of technology is to provide us with tools to maximize our talents and explore our opportunities. The challenge is to make use of the tools that fit us. Your technology can be different from my technology because our talents and interests are different. If you look at the collective, you might think that we are all becoming more alike. But when you go down to the individual level, technology has the potential to really bring out the differences that make us special. Innovation enables individualization. (…)

Q: Is the internet increasing our imaginative or innovative potential?

Kelly: That is a good point. A lot of these impossibilities happen within collective or globalist structures. We can do things that were completely impossible during the industrial age because we can now transcend our individual experience. (…)

Q: The industrial age made large-scale production possible, now we see large-scale collaboration. What is the next step?

Kelly: I love that question. What is the next stage? I think we are decades or centuries away from a global intelligence, but that would be another phase of human development. If you could generate thoughts on a planetary scale, if we moved towards singularity, that would be huge.

Q: The speed of change leaves room for optimism.

Kelly: My optimism is off the chart. I got it from Asia, where I saw how quickly civilizations could move from abject poverty to incredible wealth. If they can do it, almost anything is possible. Let me go back to the original quote about seeing God in a cell phone: The reason we should be optimistic is life itself. It keeps bouncing back even when we do horrible things to it. Life is brimming with possibilities, details, intelligence, marvels, ingenuity. And the Technium is very much an extension of that possibility space.”

Kevin Kelly, writer, photographer, conservationist, the founding executive editor of Wired magazine, and a former editor/publisher of the Whole Earth Catalog, "My Optimism Is Off The Chart", The European Magazine, 20.09.2011 (Illustration: Seashells from Okinawa by Thomas Schmall)

See also:

Kevin Kelly on Technology, or the Evolution of Evolution
Kevin Kelly on Why the Impossible Happens More Often
Kevin Kelly on the Satisfaction Paradox
Technology tag on Lapidarium
Technology tag on Lapidarium notes

Sep
2nd
Fri
permalink

Kevin Kelly on Why the Impossible Happens More Often

     
Noosphere by Tatiana Plakhova

"Everyone "knew" that people don’t work for free, and if they did, they could not make something useful without a boss. But today entire sections of our economy run on software instruments created by volunteers working without pay or bosses. Everyone knew humans were innately private beings, yet the impossibility of total open round-the-clock sharing still occurred. Everyone knew that humans are basically lazy, and they would rather watch than create, and they would never get off their sofas to create their own TV. It would be impossible that millions of amateurs would produce billions of hours of video, or that anyone would watch any of it. Like Wikipedia, or Linux, YouTube is theoretically impossible. But here this impossibility is real in practice. (…)

As far as I can tell the impossible things that happen now are in every case manifestations of a new, bigger level of organization. They are the result of large-scale collaboration, or immense collections of information, or global structures, or gigantic real-time social interactions. Just as a tissue is a new, bigger level of organization for a bunch of individual cells, these new social structures are a new bigger level for individual humans. And in both cases the new level breeds emergence. New behaviors emerge from the new level that were impossible at the lower level. Tissue can do things that cells can’t. The collectivist organizations of wikipedia, Linux, the web can do things that industrialized humans could not. (…)

The cooperation and coordination bred by irrigation and agriculture produced yet more impossible behaviors of anticipation and preparation, and sensitivity to the future. Human society unleashed all kinds of previously impossible human behaviors into the biosphere.

The technium is accelerating the creation of new impossibilities by continuing to invent new social organizations. (…)

When we are woven together into a global real-time society, the impossibilities will really start to erupt. It is not necessary that we invent some kind of autonomous global consciousness. It is only necessary that we connect everyone to everyone else. Hundreds of miracles that seem impossible today will be possible with this shared human awareness. (…)

In large groups the laws of statistics take over and our brains have not evolved to do statistics. The amount of data tracked is inhuman; the magnitudes of giga, peta, and exa don’t really mean anything to us; it’s the vocabulary of machines. Collectively we behave differently than individuals. Much more importantly, as individuals we behave differently in collectives. (…)

We are swept up in a tectonic shift toward large, fast, social organizations connecting us in novel ways. There may be a million different ways to connect a billion people, and each way will reveal something new about us. Something hidden previously. Others have named this emergence the Noosphere, or MetaMan, or Hive Mind. We don’t have a good name for it yet. (…)

I’ve used the example of the bee before. One could exhaustively study a honey bee for centuries and never see in the lone individual any of the behavior of a bee hive. It is just not there, and cannot emerge until there are a mass of bees. A single bee lives 6 weeks, so a memory of several years is impossible, but that’s how long a hive of individual bees can remember. Humanity is migrating towards its hive mind. Most of what “everybody knows” about us is based on the human individual. Collectively, connected humans will be capable of things we cannot imagine right now. These future phenomena will rightly seem impossible. What’s coming is so unimaginable that the impossibility of wikipedia will recede into outright obviousness.

Connected, in real time, in multiple dimensions, at an increasingly global scale, in matters large and small, with our permission, we will operate at a new level, and we won’t cease surprising ourselves with impossible achievements.”

Kevin Kelly, writer, the founding executive editor of Wired magazine, and a former editor/publisher of the Whole Earth Catalog, Why the Impossible Happens More Often, The Technium, 26 August 2011

Aug
7th
Sun
permalink

The Optimism Bias and Memory

“The belief that the future will be much better than the past and present is known as the optimism bias. (…)

The bias also protects and inspires us: it keeps us moving forward rather than to the nearest high-rise ledge. Without optimism, our ancestors might never have ventured far from their tribes and we might all be cave dwellers, still huddled together and dreaming of light and heat.

To make progress, we need to be able to imagine alternative realities — better ones — and we need to believe that we can achieve them. (…)

A growing body of scientific evidence points to the conclusion that optimism may be hardwired by evolution into the human brain. (…)

Our brains aren’t just stamped by the past. They are constantly being shaped by the future. (…)

Scientists who study memory proposed an intriguing answer: memories are susceptible to inaccuracies partly because the neural system responsible for remembering episodes from our past might not have evolved for memory alone. Rather, the core function of the memory system could in fact be to imagine the future (…) The system is not designed to perfectly replay past events. (…) It is designed to flexibly construct future scenarios in our minds. As a result, memory also ends up being a reconstructive process, and occasionally, details are deleted and others inserted.”

Tali Sharot, a British Academy postdoctoral fellow at the Wellcome Trust Centre for Neuroimaging at University College London, Optimism Bias: Human Brain May Be Hardwired for Hope, Time, June 6, 2011

Remembering the past to imagine the future

"A rapidly growing number of recent studies show that imagining the future depends on much of the same neural machinery that is needed for remembering the past. These findings have led to the concept of the prospective brain; an idea that a crucial function of the brain is to use stored information to imagine, simulate and predict possible future events. We suggest that processes such as memory can be productively re-conceptualized in light of this idea. (…)

Thoughts of past and future events are proposed to draw on similar information stored in episodic memory and rely on similar underlying processes, and episodic memory is proposed to support the construction of future events by extracting and recombining stored information into a simulation of a novel event. The hypothesis receives general support from findings of neural and cognitive overlap between thoughts of past and future events. (…)



Future events were more vivid and more detailed when imagined in recently experienced contexts (university locations) than when imagined in remotely experienced contexts (school settings). These results support the idea that episodic information is used to construct future event simulations. (…)

The core brain system is also used by many diverse types of task that require mental simulation of alternative perspectives. The idea is that the core brain system allows one to shift from perceiving the immediate environment to an alternative, imagined perspective that is based largely on memories of the past. Future thinking, by this view, is just one of several forms of such ability. Thinking about the perspectives of others (theory of mind) also appears to use the core brain system, as do certain forms of navigation. (…)

From an adaptive perspective, preparing for the future is a vital task in any domain of cognition or behaviour that is important for survival. The processes of event simulation probably have a key role in helping individuals plan for the future, although they are also important for other tasks that relate to the present and the past.

Memory can be thought of as a tool used by the prospective brain to generate simulations of possible future events.”

— D. L. Schacter, D. Rose Addis & R. L. Buckner, Remembering the past to imagine the future: the prospective brain (pdf), Department of Psychology, Harvard University, and the Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital

See also:
The Brain Memories Are Crucial for Looking Into the Future
How the brain stops time, Lapidarium


☞ K. K. Szpunar and K. B. McDermott, Episodic future thought and its relation to remembering: Evidence from ratings of subjective experience, Department of Psychology, Washington University
Memory tag on Lapidarium notes

May
30th
Mon
permalink

Luciano Floridi on the future development of the information society


In information societies, the threshold between online and offline will soon disappear, and once there is no longer any difference, we shall become not cyborgs but rather inforgs, i.e. connected informational organisms. (…)

Infosphere is a neologism I coined years ago on the basis of “biosphere”, a term referring to that limited region on our planet that supports life. It denotes the whole informational environment constituted by all informational entities (thus including informational agents as well), their properties, interactions, processes and mutual relations. It is an environment comparable to, but different from cyberspace (which is only one of its sub-regions, as it were), since it also includes off-line and analogue spaces of information. We shall see that it is also an environment (and hence a concept) that is rapidly evolving. (…)

Re-ontologizing is another neologism that I have recently introduced in order to refer to a very radical form of re-engineering, one that not only designs, constructs or structures a system (e.g. a company, a machine or some artefact) anew, but that fundamentally transforms its intrinsic nature. In this sense, for example, nanotechnologies and biotechnologies are not merely re-engineering but actually re-ontologizing our world. (…)

Nowadays, we are used to considering the space of information as something we log-in to and log-out from. Our view of the world (our metaphysics) is still modern or Newtonian: it is made of “dead” cars, buildings, furniture, clothes, which are non-interactive, irresponsive and incapable of communicating, learning, or memorizing. But what we still experience as the world offline is bound to become a fully interactive and responsive environment of wireless, pervasive, distributed, a2a (anything to anything) information processes, that works a4a (anywhere for anytime), in real time. This will first gently invite us to understand the world as something “alive” (artificially live). Such animation of the world will, paradoxically, make our outlook closer to that of pre-technological cultures which interpreted all aspects of nature as inhabited by teleological forces.

The second step will be a reconceptualization of our ontology in informational terms. It will become normal to consider the world as part of the infosphere, not so much in the dystopian sense expressed by a Matrix-like scenario, where the “real reality” is still as hard as the metal of the machines that inhabit it; but in the evolutionary, hybrid sense represented by an environment such as New Port City, the fictional, post-cybernetic metropolis of Ghost in the Shell.

The infosphere will not be a virtual environment supported by a genuinely “material” world behind; rather, it will be the world itself that will be increasingly interpreted and understood informationally, as part of the infosphere. At the end of this shift, the infosphere will have moved from being a way to refer to the space of information to being synonymous with Being. This is the sort of informational metaphysics I suspect we shall find increasingly easy to embrace. (…)

We have all known that this was possible on paper for some time; the difference is that it is now actually happening in our kitchen. (…)

As a consequence of such re-ontologization of our ordinary environment, we shall be living in an infosphere that will become increasingly synchronized (time), delocalised (space) and correlated (interactions). Previous revolutions (especially the agricultural and the industrial ones) created macroscopic transformations in our social structures and architectural environments, often without much foresight.

The informational revolution is no less dramatic. We shall be in serious trouble, if we do not take seriously the fact that we are constructing the new environment that will be inhabited by future generations. We should be working on an ecology of the infosphere, if we wish to avoid problems such as a tragedy of the digital commons. Unfortunately, I suspect it will take some time and a whole new kind of education and sensitivity to realise that the infosphere is a common space, which needs to be preserved to the advantage of all.

One thing seems indubitable though: the digital divide will become a chasm, generating new forms of discrimination between those who can be denizens of the infosphere and those who cannot, between insiders and outsiders, between information rich and information poor. It will redesign the map of worldwide society, generating or widening generational, geographic, socio-economic and cultural divides. But the gap will not be reducible to the distance between industrialized and developing countries, since it will cut across societies.

The evolution of inforgs

We have seen that we are probably the last generation to experience a clear difference between online and offline. The third transformation that I wish to highlight concerns precisely the emergence of artificial and hybrid (multi) agents, i.e., partly artificial and partly human (consider, for example, a family as a single agent, equipped with digital cameras, laptops, palm pilots, iPods, mobiles, wireless network, digital TVs, DVDs, CD players, etc.).

These new agents already share the same ontology with their environment and can operate in it with much more freedom and control. We (shall) delegate or outsource to artificial agents memories, decisions, routine tasks and other activities in ways that will be increasingly integrated with us and with our understanding of what it means to be an agent. (…)

Our understanding of ourselves as agents will also be deeply affected. I am not referring here to the sci-fi vision of a “cyborged” humanity. Walking around with something like a Bluetooth wireless headset implanted in your ear does not seem the best way forward, not least because it contradicts the social message it is also meant to be sending: being always on call is a form of slavery, and anyone so busy and important should have a PA instead. The truth is rather that being a sort of cyborg is not what people will embrace, but what they will try to avoid, unless it is inevitable (more on this shortly). (…)

We are all becoming connected informational organisms (inforgs). This is happening not through some fanciful transformation in our body, but, more seriously and realistically, through the re-ontologization of our environment and of ourselves. (…)

The informational nature of agents should not be confused with a “data shadow” either. The more radical change, brought about by the re-ontologization of the infosphere, will be the disclosure of human agents as interconnected, informational organisms among other informational organisms and agents. (…)

We are witnessing an epochal, unprecedented migration of humanity from its Umwelt [the outer world, or reality, as it affects the agent inhabiting it] to the infosphere itself, not least because the latter is absorbing the former. As a result, humans will be inforgs among other (possibly artificial) inforgs and agents operating in an environment that is friendlier to digital creatures. As digital immigrants like us are replaced by digital natives like our children, the latter will come to appreciate that there is no ontological difference between infosphere and Umwelt, only a difference of levels of abstractions. And when the migration is complete, we shall increasingly feel deprived, excluded, handicapped or poor to the point of paralysis and psychological trauma whenever we are disconnected from the infosphere, like fish out of water.

One day, being an inforg will be so natural that any disruption in our normal flow of information will make us sick. Even literally. A simple illustration is provided by current BAN (Body Area Network) systems – “a base technology for permanent monitoring and logging of vital signs […] [to supervise] the health status of patients suffering from chronic diseases, such as Diabetes and Asthma.” (…)

One important problem that we shall face will concern the availability of sufficient energy to stay connected to the infosphere non-stop. It is what Intel calls the battery life challenge [pdf] (…) Today, we know that our autonomy is limited by the energy bottleneck of our batteries. (…)

In the US, the average age of players is increasing, as the children of the post-computer revolutions are reaching their late thirties. (…) By the time they retire, in three or four decades, they will be living in the infosphere full-time. (…)

If you spend more time connected than sleeping, you are an inforg. (…)”

Luciano Floridi, MPhil, PhD, MA (University of Oxford), currently holds the Research Chair in Philosophy of Information and the UNESCO Chair in Information and Computer Ethics, both at the Department of Philosophy, University of Hertfordshire, The future development of the information society (pdf). (Illustration source)

See also:

Luciano Floridi on The Digital Revolution as a Fourth Revolution: “P2P doesn’t mean Pirate to Pirate but Platonist to Platonist”
Luciano Floridi on Philosophy of Information (set of videos)
The Rise of the Conversation Society: information, communication and collaboration
Keen On… James Gleick: Why Cyberspace, As a Mode of Being, Will Never Go Away (TCTV), (video) TechCrunch, Jun 23, 2011
Timothy Leary on cybernetics and a new global culture
Cyberspace tag on Lapidarium

May
29th
Sun
permalink

Anthropocene: “the recent age of man”. Mapping Human Influence on Planet Earth


"Humans have a tendency to fall prey to the illusion that their economy is at the very center of the universe, forgetting that the biosphere is what ultimately sustains all systems, both man-made and natural. In this sense, ‘environmental issues’ are not about saving the planet—it will always survive and evolve with new combinations of atoms—but about the prosperous development of our own species.”

Carl Folke is the science director of the Stockholm Resilience Centre at Stockholm University, Starting Over, SEED, April 22, 2011.

Science is recognising humans as a geological force to be reckoned with.

"The here and now are defined by astronomy and geology. Astronomy takes care of the here: a planet orbiting a yellow star embedded in one of the spiral arms of the Milky Way, a galaxy that is itself part of the Virgo supercluster, one of millions of similarly vast entities dotted through the sky. Geology deals with the now: the 10,000-year-old Holocene epoch, a peculiarly stable and clement part of the Quaternary period, a time distinguished by regular shifts into and out of ice ages. The Quaternary forms part of the 65m-year Cenozoic era, distinguished by the opening of the North Atlantic, the rise of the Himalayas, and the widespread presence of mammals and flowering plants. This era in turn marks the most recent part of the Phanerozoic aeon, the 540m-year chunk of the Earth’s history wherein rocks with fossils of complex organisms can be found. The regularity of celestial clockwork and the solid probity of rock give these co-ordinates a reassuring constancy.


Now there is a movement afoot to change humanity’s co-ordinates. In 2000 Paul Crutzen, an eminent atmospheric chemist, realised he no longer believed he was living in the Holocene. He was living in some other age, one shaped primarily by people. From their trawlers scraping the floors of the seas to their dams impounding sediment by the gigatonne, from their stripping of forests to their irrigation of farms, from their mile-deep mines to their melting of glaciers, humans were bringing about an age of planetary change. With a colleague, Eugene Stoermer, Dr Crutzen suggested this age be called the Anthropocene—“the recent age of man”. (…)

The term “paradigm shift” is bandied around with promiscuous ease. But for the natural sciences to make human activity central to its conception of the world, rather than a distraction, would mark such a shift for real. For centuries, science has progressed by making people peripheral. In the 16th century Nicolaus Copernicus moved the Earth from its privileged position at the centre of the universe. In the 18th James Hutton opened up depths of geological time that dwarf the narrow now. In the 19th Charles Darwin fitted humans onto a single twig of the evolving tree of life. As Simon Lewis, an ecologist at the University of Leeds, points out, embracing the Anthropocene as an idea means reversing this trend. It means treating humans not as insignificant observers of the natural world but as central to its workings, elemental in their force.

Sous la plage, les pavés

The most common way of distinguishing periods of geological time is by means of the fossils they contain. On this basis picking out the Anthropocene in the rocks of days to come will be pretty easy. Cities will make particularly distinctive fossils. A city on a fast-sinking river delta (and fast-sinking deltas, undermined by the pumping of groundwater and starved of sediment by dams upstream, are common Anthropocene environments) could spend millions of years buried and still, when eventually uncovered, reveal through its crushed structures and weird mixtures of materials that it is unlike anything else in the geological record.

The fossils of living creatures will be distinctive, too. Geologists define periods through assemblages of fossil life reliably found together. One of the characteristic markers of the Anthropocene will be the widespread remains of organisms that humans use, or that have adapted to life in a human-dominated world. According to studies by Erle Ellis, an ecologist at the University of Maryland, Baltimore County, the vast majority of ecosystems on the planet now reflect the presence of people. There are, for instance, more trees on farms than in wild forests. And these anthropogenic biomes are spread about the planet in a way that the ecological arrangements of the prehuman world were not. The fossil record of the Anthropocene will thus show a planetary ecosystem homogenised through domestication.

More sinisterly, there are the fossils that will not be found. Although it is not yet inevitable, scientists warn that if current trends of habitat loss continue, exacerbated by the effects of climate change, there could be a dramatic number of extinctions before long.

All these things would show future geologists that humans had been present. But though they might be diagnostic of the time in which humans lived, they would not necessarily show that those humans shaped their time in the way that people pushing the idea of the Anthropocene want to argue. The strong claim of those announcing the recent dawning of the age of man is that humans are not just spreading over the planet, but are changing the way it works.

Such workings are the province of Earth-system science, which sees the planet not just as a set of places, or as the subject of a history, but also as a system of forces, flows and feedbacks that act upon each other. This system can behave in distinctive and counterintuitive ways, including sometimes flipping suddenly from one state to another. To an Earth-system scientist the difference between the Quaternary period (which includes the Holocene) and the Neogene, which came before it, is not just what was living where, or what the sea level was; it is that in the Neogene the climate stayed stable whereas in the Quaternary it swung in and out of a series of ice ages. The Earth worked differently in the two periods.

The clearest evidence for the system working differently in the Anthropocene comes from the recycling systems on which life depends for various crucial elements. In the past couple of centuries people have released quantities of fossil carbon that the planet took hundreds of millions of years to store away. This has given them a commanding role in the planet’s carbon cycle.

Although the natural fluxes of carbon dioxide into and out of the atmosphere are still more than ten times larger than the amount that humans put in every year by burning fossil fuels, the human addition matters disproportionately because it unbalances those natural flows. As Mr Micawber wisely pointed out, a small change in income can, in the absence of a compensating change in outlays, have a disastrous effect. The result of putting more carbon into the atmosphere than can be taken out of it is a warmer climate, a melting Arctic, higher sea levels, improvements in the photosynthetic efficiency of many plants, an intensification of the hydrologic cycle of evaporation and precipitation, and new ocean chemistry.

All of these have knock-on effects both on people and on the processes of the planet. More rain means more weathering of mountains. More efficient photosynthesis means less evaporation from croplands. And the changes in ocean chemistry are the sort of thing that can be expected to have a direct effect on the geological record if carbon levels rise far enough.

At a recent meeting of the Geological Society of London that was devoted to thinking about the Anthropocene and its geological record, Toby Tyrrell of the University of Southampton pointed out that pale carbonate sediments—limestones, chalks and the like—cannot be laid down below what is called a “carbonate compensation depth”. And changes in chemistry brought about by the fossil-fuel carbon now accumulating in the ocean will raise the carbonate compensation depth, rather as a warmer atmosphere raises the snowline on mountains. Some ocean floors which are shallow enough for carbonates to precipitate out as sediment in current conditions will be out of the game when the compensation depth has risen, like ski resorts too low on a warming alp. New carbonates will no longer be laid down. Old ones will dissolve. This change in patterns of deep-ocean sedimentation will result in a curious, dark band of carbonate-free rock—rather like that which is seen in sediments from the Palaeocene-Eocene thermal maximum, an episode of severe greenhouse warming brought on by the release of pent-up carbon 56m years ago.

The fix is in

No Dickensian insights are necessary to appreciate the scale of human intervention in the nitrogen cycle. One crucial part of this cycle—the fixing of pure nitrogen from the atmosphere into useful nitrogen-containing chemicals—depends more or less entirely on living things (lightning helps a bit). And the living things doing most of that work are now people (see chart). By adding industrial clout to the efforts of the microbes that used to do the job single-handed, humans have increased the annual amount of nitrogen fixed on land by more than 150%. Some of this is accidental. Burning fossil fuels tends to oxidise nitrogen at the same time. The majority is done on purpose, mostly to make fertilisers. This has a variety of unwholesome consequences, most importantly the increasing number of coastal “dead zones” caused by algal blooms feeding on fertiliser-rich run-off waters.



Industrial nitrogen’s greatest environmental impact, though, is to increase the number of people. Although nitrogen fixation is not just a gift of life—it has been estimated that 100m people were killed by explosives made with industrially fixed nitrogen in the 20th century’s wars—its net effect has been to allow a huge growth in population. About 40% of the nitrogen in the protein that humans eat today got into that food by way of artificial fertiliser. There would be nowhere near as many people doing all sorts of other things to the planet if humans had not sped the nitrogen cycle up.

It is also worth noting that unlike many of humanity’s other effects on the planet, the remaking of the nitrogen cycle was deliberate. In the late 19th century scientists diagnosed a shortage of nitrogen as a planet-wide problem. Knowing that natural processes would not improve the supply, they invented an artificial one, the Haber process, that could make up the difference. It was, says Mark Sutton of the Centre for Ecology and Hydrology in Edinburgh, the first serious human attempt at geoengineering the planet to bring about a desired goal. The scale of its success outstripped the imaginings of its instigators. So did the scale of its unintended consequences.

For many of those promoting the idea of the Anthropocene, further geoengineering may now be in order, this time on the carbon front. Left to themselves, carbon-dioxide levels in the atmosphere are expected to remain high for 1,000 years—more, if emissions continue to go up through this century. It is increasingly common to hear climate scientists arguing that this means things should not be left to themselves—that the goal of the 21st century should be not just to stop the amount of carbon in the atmosphere increasing, but to start actively decreasing it. This might be done in part by growing forests (see article) and enriching soils, but it might also need more high-tech interventions, such as burning newly grown plant matter in power stations and pumping the resulting carbon dioxide into aquifers below the surface, or scrubbing the air with newly contrived chemical-engineering plants, or intervening in ocean chemistry in ways that would increase the sea’s appetite for the air’s carbon. (…)

The worry is that the further the Earth system gets from the stable conditions of the Holocene, the more likely it is to slip into a whole new state and change itself yet further.

The Earth’s history shows that the planet can indeed tip from one state to another, amplifying the sometimes modest changes which trigger the transition. The nightmare would be a flip to some permanently altered state much further from the Holocene than things are today: a hotter world with much less productive oceans, for example. Such things cannot be ruled out. On the other hand, the invocation of poorly defined tipping points is a well worn rhetorical trick for stirring the fears of people unperturbed by current, relatively modest, changes.

In general, the goal of staying at or returning close to Holocene conditions seems judicious. It remains to be seen if it is practical. The Holocene never supported a civilisation of 10 billion reasonably rich people, as the Anthropocene must seek to do, and there is no proof that such a population can fit into a planetary pot so circumscribed. So it may be that a “good Anthropocene”, stable and productive for humans and other species they rely on, is one in which some aspects of the Earth system’s behaviour are lastingly changed. For example, the Holocene would, without human intervention, have eventually come to an end in a new ice age. Keeping the Anthropocene free of ice ages will probably strike most people as a good idea.

Dreams of a smart planet

That is an extreme example, though. No new ice age is due for some millennia to come. Nevertheless, to see the Anthropocene as a blip that can be minimised, and from which the planet, and its people, can simply revert to the status quo, may be to underestimate the sheer scale of what is going on.

Take energy. At the moment the amount of energy people use is part of what makes the Anthropocene problematic, because of the carbon dioxide given off. That problem will not be solved soon enough to avert significant climate change unless the Earth system is a lot less prone to climate change than most scientists think. But that does not mean it will not be solved at all. And some of the zero-carbon energy systems that solve it—continent-scale electric grids distributing solar energy collected in deserts, perhaps, or advanced nuclear power of some sort—could, in time, be scaled up to provide much more energy than today’s power systems do. As much as 100 clean terawatts, compared to today’s dirty 15TW, is not inconceivable for the 22nd century. That would mean humanity was producing roughly as much useful energy as all the world’s photosynthesis combined.

In a fascinating recent book, “Revolutions that Made the Earth”, Timothy Lenton and Andrew Watson, Earth-system scientists at the universities of Exeter and East Anglia respectively, argue that large changes in the amount of energy available to the biosphere have, in the past, always marked large transitions in the way the world works. They have a particular interest in the jumps in the level of atmospheric oxygen seen about 2.4 billion years ago and 600m years ago. Because oxygen is a particularly good way of getting energy out of organic matter (if it weren’t, there would be no point in breathing) these shifts increased sharply the amount of energy available to the Earth’s living things. That may well be why both of those jumps seem to be associated with subsequent evolutionary leaps—the advent of complex cells, in the first place, and of large animals, in the second. Though the details of those links are hazy, there is no doubt that in their aftermath the rules by which the Earth system operated had changed.

The growing availability of solar or nuclear energy over the coming centuries could mark the greatest new energy resource since the second of those planetary oxidations, 600m years ago—a change in the same class as the greatest the Earth system has ever seen. Dr Lenton (who is also one of the creators of the planetary-boundaries concept) and Dr Watson suggest that energy might be used to change the hydrologic cycle with massive desalination equipment, or to speed up the carbon cycle by drawing down atmospheric carbon dioxide, or to drive new recycling systems devoted to tin and copper and the many other metals as vital to industrial life as carbon and nitrogen are to living tissue. Better to embrace the Anthropocene’s potential as a revolution in the way the Earth system works, they argue, than to try to retreat onto a low-impact path that runs the risk of global immiseration.

Such a choice is possible because of the most fundamental change in Earth history that the Anthropocene marks: the emergence of a form of intelligence that allows new ways of being to be imagined and, through co-operation and innovation, to be achieved. The lessons of science, from Copernicus to Darwin, encourage people to dismiss such special pleading. So do all manner of cultural warnings, from the hubris around which Greek tragedies are built to the lamentation of King David’s preacher: “Vanity of vanities, all is vanity…the Earth abideth for ever…and there is no new thing under the sun.” But the lamentation of vanity can be false modesty. On a planetary scale, intelligence is something genuinely new and powerful. Through the domestication of plants and animals intelligence has remade the living environment. Through industry it has disrupted the key biogeochemical cycles. For good or ill, it will do yet more.

It may seem nonsense to think of the (probably sceptical) intelligence with which you interpret these words as something on a par with plate tectonics or photosynthesis. But dam by dam, mine by mine, farm by farm and city by city it is remaking the Earth before your eyes.”

A man-made world, The Economist, May 26th 2011. (Illustration source)

Anthropocene Cartography - Mapping Human Influence on Planet Earth 


     Western Eurasian Networks | Cities, roads, railways, transmission lines and submarine cables.

"This is the age of humans.

At least, that’s the argument a number of scientists and scholars are making. They say that the impact of humans on the earth since the early 19th century has been so great, and so irreversible, that it has created a new epoch similar to the Pleistocene or Holocene. Nobel Prize winner Paul J. Crutzen even proposed the name Anthropocene, and it’s begun to catch on.

Communicating this idea to the public is one of the goals of Globaïa, an educational organization that specializes in creating visuals to explain environmental issues. In a recent project, they mapped population centers, transportation routes and energy transmission lines. (…)

We know that humans have over the centuries become a driving force on our planet. We have been, for the last thousand years or so, the main geomorphic agent on Earth. It might be hard to believe but, nowadays, human activities shift about ten times as much material on the continents’ surface as all geological processes combined. Through our technologies and extensive land-use, we have become a land-shaping force of nature, similar to rivers, rain, wind and glaciers.

Furthermore, over the last 60 years (since the end of WWII), many major human activities have been sharply accelerating in pace and intensity. Not only population trends and atmospheric CO2 but also water use, damming of rivers, deforestation, fertilizer consumption, to name a few. The period is called the “great acceleration” and today’s environmental problems are somehow linked to this rapid global increase of population and consumption and its impacts on the Earth System. (…)

Mapping the extent of our infrastructures and the energy flows of our activities is, I believe, a good starting point to increase awareness of the peculiarities of the present era. I wish these images, along with other tools created by many scientists and NGOs, could contribute to enhancing mutual understanding and creating collective solutions. For we all share the same tiny, pale blue dot. (…)

Anthropocene Mapping from Globaïa.

Q: Your maps include cities, transportation paths and various transmission lines of both power and information. Why do you feel these are valid ways of examining the impact of humans on the earth?

There are many ways to map our impacts on planet Earth. We can map croplands and pasture lands, as well as anthropogenic biomes (the so-called “anthromes”). My goal was to create something new where we could essentially see the main channels through which human exchanges (transport, energy, resources, information) are occurring. Roads and railways are high-impact human features for obvious reasons. Pipelines and transmission lines are feeding our global civilization, for better or for worse. Submarine cables are physically linking continents together and contributing to this “age of information.” I could have added telephone lines, satellites, small roads, mines, dams and so on — but the point was not to create a map with overly saturated areas either. (…)

Q: Can you discuss the role of the human in the ecosystem, and its physical footprint on the earth?

I was referring to the Anthroposphere as the human layer of the Earth System. The biosphere is made out of living matter. Together with the atmosphere, the lithosphere (including the asthenosphere) and the hydrosphere (including the cryosphere), this set of concentric spheres is creating the ecosphere — our world, the Earth. It is quite an old world where many dramatic events took place and where billions of innovations happened through evolution. It is a world fed by our mighty Sun. It is a world where humans appeared only recently. Now, indeed, our species, with its 7 billion people, is still growing inside it, converting ever more wilderness areas into human-influenced landscapes. This world is however finite, unique and fragile. Now is a good time to start thinking of it this way. I believe we are still, in our heads, living in a pre-Copernican world. It’s time to upgrade our worldview.”

— Felix D. Pharand, Mapping the Age of Humans, The Atlantic Cities, Oct 27, 2011

Welcome to the Anthropocene



A 3-minute journey through the last 250 years of our history, from the start of the Industrial Revolution to the Rio+20 Summit. The film charts the growth of humanity into a global force on an equivalent scale to major geological processes. The film was commissioned by the Planet Under Pressure conference, London 26-29 March, a major international conference focusing on solutions. planetunderpressure2012.net.

HOME documentary


"Internationally renowned photographer Yann Arthus-Bertrand makes his feature directorial debut with this environmentally conscious documentary produced by Luc Besson, and narrated by Glenn Close. Shot in 54 countries and 120 locations over 217 days, Home presents the many wonders of planet Earth from an entirely aerial perspective. As such, we are afforded the unique opportunity to witness our changing environment from an entirely new vantage point.

In our 200,000 years on Earth, humanity has hopelessly upset Mother Nature’s delicate balance. Some experts claim that we have less than ten years to change our patterns of consumption and reverse the trend before the damage is irreversible. Produced to inspire action and encourage thoughtful debate, Home poses the prospect that unless we act quickly, we risk losing the only home we may ever have.”

HOME a film by Yann Arthus-Bertrand, 2009.

See also:

A Cartography of the Anthropocene, Globaïa
The Age of Anthropocene: Should We Worry? - Imagine a world where cognition arises from techno-human networks rather than the Cartesian individual - the Cognocene, The New York Times debate, May 2011
☞ Adelheid Fisher, A Home Before the End of the World
☞ Andrew C. Revkin, Who Made This Mess of Planet Earth?, The New York Times, July 15, 2011
☞ Daniel T. Willingham, Trust Me, I’m a Scientist, Scientific American, May 5, 2011
Living Planet Report, WWF
It Took Earth Ten Million Years to Recover from Greatest Mass Extinction of all time, ScienceDaily, May 27, 2012
Earth tag on Lapidarium notes

Apr
26th
Tue
permalink

The Brain: Memories Are Crucial for Looking Into the Future

You need a base to build the future

The past and future may seem like different worlds, yet the two are intimately intertwined in our minds. In recent studies on mental time travel, neuroscientists found that we use many of the same regions of the brain to remember the past as we do to envision our future lives. In fact, our need for foresight may explain why we can form memories in the first place. They are indeed “a base to build the future.” And together, our senses of past and future may be crucial to our species’ success.

Endel Tulving, a neuroscientist at the University of Toronto, first proposed a link between memory and foresight in 1985. It had occurred to him as he was examining a brain-injured patient. “N.N.,” as the man was known, still had memories of basic facts. He could explain how to make a long-distance call and draw the Statue of Liberty. But he could not recall a single event from his own life. In other words, he had lost his episodic memory. Tulving and his colleagues then discovered that N.N. could not imagine the future. “What will you be doing tomorrow?” Tulving asked him during one interview. After 15 seconds of silence, N.N. smiled faintly. “I don’t know,” he said.

“Do you remember the question?” Tulving asked.

“About what I’ll be doing tomorrow?” N.N. replied.

“Yes. How would you describe your state of mind when you try to think about it?”

N.N. paused for a few more seconds. “Blank, I guess,” he said. The very concept of the future seemed meaningless to N.N. “It’s like being in a room with nothing there and having a guy tell you to go find a chair,” he explained.

On the basis of his study of N.N., Tulving proposed that projecting ourselves into the future requires the same brain circuitry we use to remember ourselves in the past. Over the past decade, as scientists have begun to use fMRI scanners to probe the activity of the brain, they have found support for his hypothesis. Last year, for example, Tulving and his colleagues had volunteers lie in an fMRI scanner and imagine themselves in the past, present, and future. The researchers saw a number of regions become active in the brains of the volunteers while thinking of the past and future, but not the present. (…)

Stan Klein, a psychologist at the University of California, Santa Barbara, argues that the intertwining of foresight and episodic memory may help explain how this type of memory evolved in the first place. In Klein’s view, episodic memory probably arose in part because it helped individuals make good decisions about what to do next. For instance, it could have guided our ancestors not to visit a local watering hole on moonlit nights because that was when saber-toothed tigers hung out there. (…)

Klein says his results illustrate the decision-making value of memory: When students were actively planning the future, their memories worked best.

The precursor to mental time travel may have evolved in mammals more than 100 million years ago. Scientists can get clues to its origins by studying lab rats. When a rat moves around a space—be it a meadow or a lab maze—it encodes a map in its hippocampus, a structure located near the brain’s core. Neurons there become active at particular spots along the route. When the rat travels that route again, the same “place cells” fire in the same order. (…)

A number of studies suggest that the hippocampus continues to be crucial to our own power of foresight. Damage to the hippocampus can rob people of their foresight, for example, and when people with healthy brains think about their future, the hippocampus is part of the network that becomes active. But our powers of foresight go far beyond a rodent’s. We don’t just picture walking through a forest. We travel forward into a social future as well, in which we can predict how people will react to the things we do.

Scientists cannot say for sure exactly when our ancestors shifted to this more sophisticated kind of time travel. It is possible that the transition started in our primate ancestors, judging from some intriguing stories about our fellow apes. In the 1990s, for example, zookeepers in Sweden spied on a chimpanzee that kept flinging rocks at human visitors. They found that before the zoo opened each day, the chimp collected a pile of rocks, seemingly preparing ammunition for his attacks when the visitors arrived. Did the chimp see itself a few hours into the future and realize it would need a cache of artillery? The only way we could know for sure would be for the chimp to tell us.

The fact that chimpanzees can’t explain themselves may itself be a clue to the nature of time travel. Full-blown language, which evolved only within the past few hundred thousand years, is one of the traits that make us humans different from other species. It is possible that once language evolved in our ancestors, it changed how we traveled through time. We could now tell ourselves stories about our lives and use that material to compose new stories about our future. Perhaps the literary imagination that gave rise to Dickens and Twain and Nabokov is, in fact, a time machine we carry in our head.”

Carl Zimmer, an award-winning biology writer and author, The Brain: Memories Are Crucial for Looking Into the Future, DISCOVER Magazine, April 2011

See also:

The Optimism Bias and Memory
Memory tag on Lapidarium notes

Mar
30th
Wed
permalink

Kevin Kelly on the Satisfaction Paradox

“What if you lived in a world where everything around you was just what you wanted? And there was tons of it. How would you make a choice since all of it — 100% — was just what you liked?

What if you lived in a world where every great movie, book, song that was ever produced was at your fingertips as if “for free”, and your filters and friends had weeded out the junk, the trash, and anything that would remotely bore you. The only choices would be the absolute cream of the cream, the things your best friend would recommend. What would you watch or read or listen to next?

What if you lived in a miraculous world where the only works you ever saw were ones you absolutely loved, including the ones that were randomly thrown in? In other words, you could only watch things perfectly matched to you at that moment. But the problem is that in this world there are a thousand times as many works as you have time in your long life to see. How would you choose? Or would you? (…)

The paradox is that not-choosing may not be satisfying!

We may need to make choices in order to be satisfied, even if those choices lead to less than satisfying experiences.
But of course this would be less than optimal satisfaction. Thus, there may be a psychological dilemma or paradox that ultimate satisfaction may ultimately be unsatisfying.

This is the psychological problem of dealing with abundance rather than scarcity. It is not quite the same problem of abundance articulated by the Paradox of Choice, the theory that we find too many choices paralyzing: if we are given 57 different mustards to choose from at the supermarket, we often leave without choosing any.

The paradox of satisfaction suggests that the tools we employ to increase our satisfaction of choices — filters and recommendations — may be unsatisfying if they diminish the power of our choices. Another way to say this: no system can be absolutely satisfying. (…)

Let’s say that after all is said and done, in the history of the world there are 2,000 theatrical movies, 500 documentaries, 200 TV shows, 100,000 songs, and 10,000 books that I would be crazy about. I don’t have enough time to absorb them all, even if I were a full time fan. But what if our tools could deliver to me only those items to choose from? How would I — or you — choose from those select choices? (…)

I believe that answering this question is what outfits like Amazon will be selling in the future. For the price of a subscription you will subscribe to Amazon and have access to all the books in the world at a set price. (An individual book you want to read will be as if it was free, because it won’t cost you extra.) The same will be true of movies (Netflix), or music (iTunes or Spotify or Rhapsody.) You won’t be purchasing individual works.

Instead you will pay Amazon, or Netflix, or Spotify, or Google for their suggestions of what you should pay attention to next. Amazon won’t be selling books (which are marginally free); they will be selling their recommendations of what to read. You’ll pay the subscription fee in order to get access to their recommendations to the “free” works, which are also available elsewhere. Their recommendations (assuming continual improvements by more collaboration and sharing of highlights, etc.) will be worth more than the individual books. You won’t buy movies; you’ll buy cheap access and pay for personalized recommendations.

The new scarcity is not creative products but satisfaction. And because of the paradox of satisfaction, few people will ever be satisfied.”

Kevin Kelly, the founding executive editor of Wired magazine, and a former editor/publisher of the Whole Earth Catalog, The Satisfaction Paradox, The Technium, March 2011.

Feb
24th
Thu
permalink

Map–territory relation - a brief résumé


     René Magritte, The Treachery of Images, “Ceci n’est pas une pipe” (This is not a pipe)

"If words are not things, or maps are not the actual territory, then, obviously, the only possible link between the objective world and the linguistic world is found in structure, and structure alone.
The only usefulness of a map or a language depends on the similarity of structure between the empirical world and the map-languages.”

Alfred Korzybski, Science & Sanity: An Introduction to Non-Aristotelian Systems and General Semantics, Institute of General Semantics, 1994, p. 61.

"The map–territory relation describes the relationship between an object and a representation of that object, as in the relation between a geographical territory and a map of it. Polish-American scientist and philosopher Alfred Korzybski remarked that “the map is not the territory,” encapsulating his view that an abstraction derived from something, or a reaction to it, is not the thing itself. For example, the pain from a stone falling on one’s foot is not the actual stone, it’s one’s perception of the stone; one’s opinion of a politician, favorable or unfavorable, is not that person; and so on. A specific abstraction or reaction does not capture all facets of its source — e.g. the pain in one’s foot does not convey the internal structure of the stone, you don’t know everything that is going on in the life of a politician, etc. — and thus may limit an individual’s understanding and cognitive abilities unless the two are distinguished. Korzybski held that many people do confuse maps with territories—that is, confuse models of reality with reality itself—in this sense. (…)

Gregory Bateson, in “Form, Substance and Difference" from Steps to an Ecology of Mind (1972), elucidates the essential impossibility of knowing what the territory is, as any understanding of it is based on some representation:

"We say the map is different from the territory. But what is the territory? Operationally, somebody went out with a retina or a measuring stick and made representations which were then put on paper. What is on the paper map is a representation of what was in the retinal representation of the man who made the map; and as you push the question back, what you find is an infinite regress, an infinite series of maps. The territory never gets in at all. […] Always, the process of representation will filter it out so that the mental world is only maps of maps, ad infinitum."

Neil Gaiman retells the parable in reference to storytelling in Fragile Things:

"One describes a tale best by telling the tale. You see? The way one describes a story, to oneself or the world, is by telling the story. It is a balancing act and it is a dream. The more accurate the map, the more it resembles the territory. The most accurate map possible would be the territory, and thus would be perfectly accurate and perfectly useless. The tale is the map that is the territory."

Korzybski’s dictum “the map is not the territory” is also cited as an underlying principle used in neuro-linguistic programming, where it is used to signify that individual people in fact do not in general have access to absolute knowledge of reality, but in fact only have access to a set of beliefs they have built up over time, about reality. So it is considered important to be aware that people’s beliefs about reality and their awareness of things (the “map”) are not reality itself or everything they could be aware of (“the territory”). The originators of NLP have been explicit that they owe this insight to General Semantics.” — (Wiki)

Erik Evens in The Linguistic Metaphor:

"Korzybski’s General Semantics offered a view that human knowledge is limited by two main factors: the structure of the human nervous system, and the structure of human languages. He maintained that people cannot experience the world directly, but only through their “abstractions” - nonverbal impressions derived from data detected and transmitted by the senses and the nervous system, and verbal indicators derived from language. (…)

Here’s a story about Alfred Korzybski that’s amusing, and worth repeating because it’s illustrative of some of these ideas: One day, Korzybski was giving a lecture to a group of students, and he suddenly interrupted the lesson in order to retrieve a packet of biscuits, wrapped in white paper, from his briefcase. He muttered that he just had to eat something, and he asked the students in the front row if they would also like a biscuit. A few students took a biscuit. “Nice biscuit, don’t you think?” said Korzybski, while he took a second one. The students were chewing vigorously. After a while he tore the white paper from the biscuits, in order to reveal the original packaging. On it was a big picture of a dog’s head and the words “Dog Cookies”. The students looked at the package, and were shocked. Two of them wanted to throw up, put their hands in front of their mouths, and ran out of the lecture hall to the bathroom.

"You see, ladies and gentlemen", Korzybski remarked, "I have just demonstrated that people don’t just eat food, but they also eat words, and that the taste of the former is often outdone by the taste of the latter." It seems his prank aimed to illustrate how some human suffering originates from the confusion or conflation of linguistic representations of reality, and reality itself.

The Belgian surrealist artist René Magritte illustrated the concept that “perception always intercedes between reality and ourselves” in a number of paintings, including a famous work entitled The Treachery of Images, which consists of a drawing of a pipe with the caption, Ceci n’est pas une pipe (“This is not a pipe”).” — (Wiki)

The painting is not a pipe, but rather an image of a pipe, which was Magritte’s point: "The famous pipe. How people reproached me for it! And yet, could you stuff my pipe? No, it’s just a representation, is it not? So if I had written on my picture "This is a pipe," I’d have been lying!" — (Harry Torczyner, Magritte: Ideas and Images. p. 71.)

Alfred Korzybski:

"I use the map-territory relationship because the characteristic are general for all existing forms of representation which include the structure of language.
We observe

1) That a map-language is not the territory-fact, etc.,

2) Map-language does not cover all the characteristics of territory-fact,

3) Forms of representation are self-reflexive in the sense that an ideal map would include the map of the map, etc., and in language we can speak about language.

These three premises are child-like in their simplicity, and yet involve a flat denial of the fundamental present, yet very ancient, unrevised, harmful premises. The third premise has been historically entirely neglected except partially in mathematics.
This self-reflexiveness of language, however, is at the bottom of most human difficulties in daily life as well as in science. (…)

As we have seen, for maximum predictability, we must have a map-language similar in structure to the territory-facts. The next crucial problem is to investigate empirically whether our present map-language is similar in structure to the territory-facts. We know empirically that “space” and “time” do not exist separately, and so they can not be divided, and so the facts are non-elementalistic. We know, on the other hand, that verbally we can separate or split them into fictitious elements which do not exist as such. In other words, the structure of the existing language is elementalistic where the facts are non-elementalistic. This goes much farther. Thus, in actual life we can not split “body” and “mind”, “emotions” and “intellect”, etc., while verbalistically we can do that quite happily, and speculate uselessly on these split fictions. We conclude that this elementalistic language is not similar in structure to a non-elementalistic world and ourselves.

Let us analyze further. We find that every “chair”, “match”, “house”, “horse”, “man”, etc., is different, while the old language of intensional structures has only verbal definitions for verbal fictions called, say, “man”, “chair”, etc., emphasizing similarities and disregarding differences. By extension we have only actual chair1, chair2, etc., Smith1, Smith2, etc., which are actualities, not verbal fictions and verbal definitions. We conclude that the structure of the old accepted language, being elementalistic and intensional, is not similar in structure to the facts of life and ourselves. This is a conclusion reached by inspection of facts of ordinary life and scientific work, and also of linguistic facts concerning the structure of language, which have been entirely neglected in the past.

The conclusions we must draw from these obvious observations are startling and extremely far-reaching, involving fundamentally the future of mankind and civilization.

Because the structure of the present language is definitely and empirically not similar in structure to facts of life and ourselves, proper evaluation and so predictability in our human affairs is thoroughly impossible except by accident.

Another more serious consequence of the neuro-linguistic and neuro-semantic chaos is due to the lack of a science of man, by which I mean the lack of application of standard scientific methods to the affairs of man. With our present intensional verbalistic attitudes which follow the structure of language, agreement between individuals and groups is in principle impossible. With a change to extensional orientation, strictly connected with the extensionalization of the structure of language, disagreement becomes impossible. (…) We must make a serious analysis of the neuro-linguistic and neuro-semantic factors involved in our present situation, and that realization may, perhaps, help us stop the suicide of our world.”

Alfred Korzybski, Collected Writings, 1920-1950, Institute of General Semantics, 1990, p. 275-276.

Heiner Benking:

"We have to be able to talk about the same things with words which are grounded. (…) We need to see terms and concepts in their context. (…) We can construct frames-of-reference as a schemata to visually reference and share diverse but inter-connected positions, focuses, ranges and horizons, in order to develop not only common grounds but a tolerance for alternate ways of seeing our different levels and scopes. By adequate and open conversation, we can create a common ground. In this way every player can discover his own place in the general panorama and understand better what he does and what he could and should do, or not do.” We can use the cybernetic tools to order our data-base.  But he warns that we should not let us stray in a “virtual cyberspace” in a mainly and merely technical sense, with no relevance to real situations. Scales and proportions and their consequences should be duly taken in account in our representation, as we construct a 3 dimensional space/time model.”International Encyclopedia of Systems and Cybernetics

[This note will be gradually expanded…]

See also:

The Relativity of Truth - a brief résumé, Lapidarium
Cognition / relativity tag on Lapidarium
John Shotter on encounters with ‘Other’ - from inner mental representation to dialogical social practices, Lapidarium
Philosophy of perception, Structural differential, Representative realism, List of cognitive biases, Emic and etic, Simulacra and Simulation, Social constructionism

Feb
23rd
Wed
permalink

Mark Changizi on Humans, Version 3.0.


The next giant leap in human evolution may not come from new fields like genetic engineering or artificial intelligence, but rather from appreciating our ancient brains.

“Genetic engineering could engender marked changes in us, but it requires a scientific bridge between genotypes—an organism’s genetic blueprints—and phenotypes, which are the organisms themselves and their suite of abilities. A sufficiently sophisticated bridge between these extremes is nowhere in sight.

And machine-enhancement is part of our world even today, manifesting in the smartphones and desktop computers most of us rely on each day. Such devices will continue to further empower us in the future, but serious hardware additions to our brains will not be forthcoming until we figure out how to build human-level artificial intelligences (and meld them to our neurons), something that will require cracking the mind’s deepest mysteries. I have argued that we’re centuries or more away from that. (…)

There is, however, another avenue for human evolution, one mostly unappreciated in both science and fiction. It is this unheralded mechanism that will usher in the next stage of human, giving future people exquisite powers we do not currently possess, powers worthy of natural selection itself. And, importantly, it doesn’t require us to transform into cyborgs or bio-engineered lab rats. It merely relies on our natural bodies and brains functioning as they have for millions of years.

This mystery mechanism of human transformation is neuronal recycling, coined by neuroscientist Stanislas Dehaene, wherein the brain’s innate capabilities are harnessed for altogether novel functions.

This view of the future of humankind is grounded in an appreciation of the biologically innate powers bestowed upon us by hundreds of millions of years of evolution. This deep respect for our powers is sometimes lacking in the sciences, where many are taught to believe that our brains and bodies are taped-together, far-from-optimal kluges. In this view, natural selection is so riddled by accidents and saddled with developmental constraints that the resultant biological hardware and software should be described as a “just good enough” solution rather than as a “fine-tuned machine.”

So it is no wonder that, when many envisage the future, they posit that human invention—whether via genetic engineering or cybernetic AI-related enhancement—will be able to out-do what evolution gave us, and so bootstrap our species to a new level. This rampant overoptimism about the power of human invention is also found among many of those expecting salvation through a technological singularity, and among those who fancy that the Web may some day become smart.

The root of these misconceptions is the radical underappreciation of the design engineered by natural selection into the powers implemented by our bodies and brains, something central to my 2009 book, The Vision Revolution. For example, optical illusions (such as the Hering illusion) are not examples of the brain’s poor hardware design, but, rather, consequences of intricate evolutionary software for generating perceptions that correct for neural latencies in normal circumstances. And our peculiar variety of color vision, with two of our sensory cones having sensitivity to nearly the same part of the spectrum, is not an accidental mutation that merely stuck around, but, rather, appears to function with the signature of hemoglobin physiology in mind, so as to detect the color signals primates display on their faces and rumps.

These and other inborn capabilities we take for granted are not kluges, they’re not “good enough,” and they’re more than merely smart. They’re astronomically brilliant in comparison to anything humans are likely to invent for millennia.

Neuronal recycling exploits this wellspring of potent powers. If one wants to get a human brain to do task Y despite it not having evolved to efficiently carry out task Y, then a key point is not to forcefully twist the brain to do Y. Like all animal brains, human brains are not general-purpose universal learning machines, but, instead, are intricately structured suites of instincts optimized for the environments in which they evolved. To harness our brains, we want to let the brain’s brilliant mechanisms run as intended—i.e., not to be twisted. Rather, the strategy is to twist Y into a shape that the brain does know how to process. (…)

There is a very good reason to be optimistic that the next stage of human will come via the form of adaptive harnessing, rather than direct technological enhancement: It has already happened.

We have already been transformed via harnessing beyond what we once were. We’re already Human 2.0, not the Human 1.0, or Homo sapiens, that natural selection made us. We Human 2.0’s have, among many powers, three that are central to who we take ourselves to be today: writing, speech, and music (the latter perhaps being the pinnacle of the arts). Yet these three capabilities, despite having all the hallmarks of design, were not a result of natural selection, nor were they the result of genetic engineering or cybernetic enhancement to our brains. Instead, and as I argue in both The Vision Revolution and my forthcoming Harnessed, these are powers we acquired by virtue of harnessing, or neuronal recycling.

In this transition from Human 1.0 to 2.0, we didn’t directly do the harnessing. Rather, it was an emergent, evolutionary property of our behavior, our nascent culture, that bent and shaped writing to be right for our visual system, speech just so for our auditory system, and music a match for our auditory and evocative mechanisms.

And culture’s trick? It was to shape these artifacts to look and sound like things from our natural environment, just what our sensory systems evolved to expertly accommodate. There are characteristic sorts of contour conglomerations occurring among opaque objects strewn about in three dimensions (like our natural Earthly habitats), and writing systems have come to employ many of these naturally common conglomerations rather than the naturally uncommon ones. Sounds in nature, in particular among the solid objects that are most responsible for meaningful environmental auditory stimuli, follow signature patterns, and speech also follows these patterns, both in its fundamental phoneme building blocks and in how phonemes combine into morphemes and words. And we humans, when we move and behave, make sounds having a characteristic animalistic signature, something we surely have specialized auditory mechanisms for sensing and processing; music is replete with these characteristic sonic signatures of animal movements, harnessing our auditory mechanisms that evolved for recognizing the actions of other large mobile creatures like ourselves.

Culture’s trick, I have argued in my research, was to harness by mimicking nature. This “nature-harnessing” was the route by which these three kernels of Human 2.0 made their way into Human 1.0 brains never designed for them.

The road to Human 3.0 and beyond will, I believe, be largely due to ever more instances of this kind of harnessing. And although we cannot easily anticipate the new powers we will thereby gain, we should not underestimate the potential magnitude of the possible changes. After all, the change from Human 1.0 to 2.0 is nothing short of universe-rattling: It transformed a clever ape into a world-ruling technological philosopher.

Although the step from Human 1.0 to 2.0 was via cultural selection, not via explicit human designers, does the transformation to Human 3.0 need to be entirely due to a process like cultural evolution, or might we have any hope of purposely guiding our transformation? When considering our future, that’s probably the most relevant question we should be asking ourselves.

I am optimistic that we may be able to explicitly design nature-harnessing technologies in the near future, now that we have begun to break open the nature-harnessing technologies cultural selection has built thus far. One of my reasons for optimism is that nature-harnessing technologies (like writing, speech, and music) must mimic fundamental ecological features in nature, and that is a much easier task for scientists to tackle than emulating the exorbitantly complex mechanisms of the brain.

And nature-harnessing may be an apt description of emerging technological practices, such as the film industry’s ongoing struggle to better design the 3D experience to tap into the evolved functions of binocular vision, the gaming industry’s attempts to “gameify” certain tasks (exemplified in the work of Jane McGonigal), or the drive within robotics for more emotionally expressive faces (such as the child robot of Minoru Asada).

Admittedly, none of these sound remotely as revolutionary as writing, speech, or music, but it can be difficult to envision what these developments can become once they more perfectly harness our exquisite biological instincts. (Even writing was, for centuries, used mostly for religious and governmental book-keeping purposes—only relatively recently has the impact of the written word expanded to revolutionize the lives of average humans.)

The point is, most science fiction gets all this wrong. While the future may be radically “futuristic,” with our descendants having breathtaking powers we cannot fathom, it probably won’t be because they evolved into something new, or were genetically modified, or had AI-chip enhancements. Those powerful beings will simply be humans, like you and me. But they’ll have been nature-harnessed in ways we cannot anticipate, the magic latent within each of us used for new, brilliant Human 3.0 capabilities.”
— Mark Changizi (cognitive scientist, author), Humans, Version 3.0, SEED.com, Feb 23, 2011

See also: Prof. Stanislas Dehaene, "How do humans acquire novel cultural skills? The neuronal recycling model", LSE Institute | Nicod, (Picture source: Rzeczpospolita)
Sep
28th
Tue
permalink

Robert Lanza: Does the Past Exist Yet? Evidence Suggests Your Past Isn’t Set in Stone

(Picture source)

“Recent discoveries require us to rethink our understanding of history. “The histories of the universe,” said renowned physicist Stephen Hawking, “depend on what is being measured, contrary to the usual idea that the universe has an objective observer-independent history.”

Is it possible we live and die in a world of illusions? Physics tells us that objects exist in a suspended state until observed, when they collapse into just one outcome. Paradoxically, whether events happened in the past may not be determined until sometime in your future — and may even depend on actions that you haven’t taken yet.

In 2002, scientists carried out an amazing experiment, which showed that particles of light (“photons”) knew, in advance, what their distant twins would do in the future. They tested the communication between pairs of photons — whether each would behave as a wave or a particle. Researchers stretched the distance one of the photons had to take to reach its detector, so that the other photon would hit its own detector first. The photons taking this path had already finished their journeys — they either collapsed into a particle or didn’t before their twin encountered a scrambling device. Somehow, the particles acted on this information before it happened, and across distances instantaneously, as if there were no space or time between them. They decided not to become particles before their twin ever encountered the scrambler. It doesn’t matter how we set up the experiment. Our mind and its knowledge are the only things that determine how they behave. Experiments consistently confirm these observer-dependent effects.

More recently (Science 315, 966, 2007), scientists in France shot photons into an apparatus and showed that what they did could retroactively change something that had already happened. As the photons passed a fork in the apparatus, they had to decide whether to behave like particles or waves when they hit a beam splitter. Later on, well after the photons passed the fork, the experimenter could randomly switch a second beam splitter on and off. It turns out that what the observer decided at that point determined what the particle actually did at the fork in the past. At that moment, the experimenter chose his history. (…)

But what about dinosaur fossils? Fossils are really no different than anything else in nature. For instance, the carbon atoms in your body are “fossils” created in the heart of exploding supernova stars. Bottom line: reality begins and ends with the observer. “We are participators,” John Wheeler said, “in bringing about something of the universe in the distant past.” Before his death, he stated that when observing light from a quasar, we set up a quantum observation on an enormously large scale. It means, he said, the measurements made on the light now determine the path it took billions of years ago.

Like the light from Wheeler’s quasar, historical events, such as who killed JFK, might also depend on events that haven’t occurred yet. There’s enough uncertainty that it could be one person in one set of circumstances, or another person in another. Although JFK was assassinated, you only possess fragments of information about the event. But as you investigate, you collapse more and more reality. According to biocentrism, space and time are relative to the individual observer — we each carry them around like turtles with shells. (…)

History is a biological phenomenon — it’s the logic of what you, the animal observer, experience. You have multiple possible futures, each with a different history, like in the Science experiment. Consider the JFK example: say two gunmen shot at JFK, and there was an equal chance one or the other killed him. This would be a situation much like the famous Schrödinger’s cat experiment, in which the cat is both alive and dead — both possibilities exist until you open the box and investigate.

“We must re-think all that we have ever learned about the past, human evolution and the nature of reality, if we are ever to find our true place in the cosmos,” says Constance Hilliard, a historian of science at UNT. Choices you haven’t made yet might determine which of your childhood friends are still alive, or whether your dog got hit by a car yesterday. In fact, you might even collapse realities that determine whether Noah’s Ark sank. “The universe,” said John Haldane, “is not only queerer than we suppose, but queerer than we can suppose.”

See also:

Biocentrism
The Experience and Perception of Time, Stanford Encyclopedia of Philosophy
Time tag on Lapidarium
Sep
27th
Mon
permalink
Hans Reichenbach, The Direction of Time, Courier Dover Publications, 1999, page 9. (via fuckyeahquantummechanics)
