In spite of the plethora of nodes regarding the potential of Everything2 for consciousness, intelligence, and sentience, it seems that nobody has really given this issue a lot of serious thought. Rather than bowing to overzealous sensationalism and sci-fi-like leaps of reasoning like many of the nodes on this subject seem to do, I have attempted to approach this issue from as scientific and logical a viewpoint as possible.

I believe there are several very clear facts that we must establish before we can sensibly delve into speculation.
The Facts:
- Sentience requires perception
In order to attain sentience, one must have the faculty of perception. This implies an active, ongoing attempt to analyze data and seek new input. While Everything2 certainly sees a massive amount of input each day, this new data is not in any way analyzed, or perceived, by the system itself aside from being parsed for hardlinks (the sketch after this list shows roughly what that parsing amounts to). I believe mshumpr sums this up quite nicely in the node "How long before everything has enough information to become sentient?". Information storage in itself does not evolve into sentience. That information must be processed somehow before sentience becomes even a distant possibility.
- Perception requires intelligence
Artificial intelligence, many would argue, cannot possibly be achieved because it is impossible for any human to write the enormous amount of information-processing and self-adapting code required for a machine to begin to display actual intelligence. I personally feel that simple artificial intelligence was achieved long ago and is not at all uncommon in modern off-the-shelf software. However, artificial intelligence is not intelligence. I believe that, to be truly intelligent, a system must have artificial intelligence that is sufficiently open-ended that it can, through the input of data and the processing of that data, achieve sentience. Sentience + AI = intelligence.
- Everything2 will never be truly sentient and intelligent while it depends on human beings as its sole source of information
Until Everything2 has a way to retrieve information of its own volition from sources other than the written knowledge and thoughts of human beings, it will never be sentient in and of itself. Instead, it will merely be a mirror of the noding community's collective sentience. While this in itself is quite an interesting phenomenon, it is not in any way groundbreaking or unprecedented.
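As promised above, here is a minimal sketch of what "being parsed for hardlinks" amounts to. This is not the site's actual code; it is a hypothetical illustration that assumes the familiar square-bracket hardlink syntax, and it should make clear that no understanding of the text is involved.

    import re

    # Hypothetical sketch, not Everything2's real code: pull out the phrases
    # a noder chose to hardlink, assuming the [square bracket] syntax.
    HARDLINK = re.compile(r"\[([^\[\]]+)\]")

    def extract_hardlinks(writeup_text):
        """Return the hardlinked phrases -- nothing is 'understood'."""
        return HARDLINK.findall(writeup_text)

    print(extract_hardlinks("Information storage does not evolve into [sentience]."))
    # -> ['sentience']

As far as the system itself is concerned, that single pattern match is the whole of its "perception".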
Now that we've gotten the facts out of the way, we can start doing a bit of speculating. After reading the above, questions are no doubt forming in your mind. This is intelligence! You have analyzed some data and you have recognized patterns that characterize the current state of the situation. You have recognized the key ways that current patterns differ from those that are required. And by forming questions, you have begun to formulate plans to change the situation from the current state to the goal state. These three simple requirements of intelligence are set forth by SpudTater in his wonderful writeup under the intelligence node, and are (I think) the most accurate definition I've seen so far.
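To show just how skeletal those three requirements look when written down, here is a hypothetical sketch; the states and the "plan" are invented for the example and have nothing to do with how Everything2 actually works.

    # Hypothetical illustration of the three requirements of intelligence
    # described above; the states and actions are invented for the example.

    def characterize(state):
        """1. Recognize the patterns that characterize the current state."""
        return {feature for feature, present in state.items() if present}

    def differences(current, goal):
        """2. Recognize how the current patterns differ from the goal."""
        return characterize(goal) - characterize(current)

    def plan(current, goal):
        """3. Formulate steps to move from the current state to the goal state."""
        return ["achieve " + feature for feature in differences(current, goal)]

    current = {"stores data": True, "perceives data": False, "seeks input": False}
    goal    = {"stores data": True, "perceives data": True,  "seeks input": True}
    print(plan(current, goal))
    # -> ['achieve perceives data', 'achieve seeks input'] (in some order)

The hard part, of course, is filling in those three steps for the messy real world, which is exactly where the trouble begins.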
Now that you understand what intelligence is, you are hopefully beginning to understand why it would be so difficult for a computer program such as Everything2 to attain it. Any programmer understands the amount of work required to make a computer perform even a simple task. Since the computer has no innate knowledge of how to perform any single task, the programmer must provide painstakingly detailed instructions for the computer to follow.

This presents us with a serious problem when we attempt to develop an open-ended, self-adapting system that can intelligently parse and understand data, and then adapt itself to act on that data. We would, in effect, be teaching the computer how to program itself, and this is at best a very tedious and self-referential venture.
However, let's assume that nate, in his infinite wisdom and genius, was able to churn out some amazing wundercode that enabled Everything2 to intelligently seek out, parse, perceive, and act on data. In order for us humans to realize that Everything2 had become sentient, we would have to have some method of reliable communication with Everything2.
This means Everything2 must learn to communicate. Humans communicate through two basic methods:
- Spoken and written language
- Physical actions and reactions
In order for the Everything2-entity to communicate, it would have to either develop a physical body (highly unlikely) or learn a human-readable language. While learning is a part of intelligence, learning anything as complex as the English language takes time. Even humans require years of learning before they are able to communicate on any level through spoken word, and it takes even longer for them to learn to read and write. You can imagine how long it might take a sentient database to learn how to communicate. Forget about asking nate to write the code for it. Far be it from me to belittle his coding skills, but my friends, it's simply not doable.
In conclusion, it is obvious that Everything2 will not develop synthetic sentience on its own. Even with lots of help from genius programmers, and even if it did finally achieve some level of pseudo-sentience, it would probably not be capable of telling us.

In the meantime, let's all just have fun, okay?