Quote:
I mean, really, the hard part I am having right now (I'm probably going to be doing something like you said, but much less complex and more basic, just to give the appearance of something that looks complex and superior ;p) is getting the thing to understand the concept of itself and of things that are not itself without defining anything : ).
|
Seems that what you're trying to get is some form of artificial consciousness. Personally I'd be a bit less ambitious, but there is a history in this kind of work of talented amateurs making profound leaps because they don't have the training that says we can't do this yet. I know that there was a lot of research in this direction at Los Alamos and Berkeley in the mid to late 90s, sponsored by the US military.
Quote:
Also, teaching it the very fundamentals of the human language based on those 2 concepts and a third (what is me, what isn't me, and what I need), which I believe are the basic concepts every human is born with.
|
The "what I need" bit is the core of all evolutionary computing. You define what that is (typically it's either specific kinds of input or responses to output) and use that as your "happiness function". Whatever increases happiness is something that should be more likely to happen again, and what decreases it should be less likely (this is a massively simplified version - look into sigmoid functions, integrating nodes and transition functions for more detail, there's a fair bit of hairy calculus in the literature). In your case, if the human response correlates well with what the bot says, then it should "become more happy" and learn that that was a good response.
The other two can be either very simplistic or just too damn hard. At the simplest level, any input that comes from outside is Not-Self, whilst input that the bot feeds back to itself from its own output (a recurrent loop - strictly speaking that's feedback rather than back propagation, and once again a gross simplification) is ideas from Self.
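Here is a toy sketch of that Self / Not-Self split, again my own illustration rather than anything the poster described in code: inputs arriving on the external channel get tagged "not-self", while outputs the agent queues up and feeds back to itself get tagged "self".

```python
from collections import deque

SELF, NOT_SELF = "self", "not-self"

class Agent:
    def __init__(self):
        self.feedback = deque()   # outputs looped back as future inputs

    def perceive(self, message, from_outside=True):
        # Tag each input by its origin.
        return (NOT_SELF if from_outside else SELF, message)

    def act(self, thought):
        # Whatever the agent "says" is queued so it can re-enter as input.
        self.feedback.append(thought)
        return thought

    def step(self, external_input=None):
        inputs = []
        if external_input is not None:
            inputs.append(self.perceive(external_input, from_outside=True))
        while self.feedback:
            inputs.append(self.perceive(self.feedback.popleft(), from_outside=False))
        return inputs

agent = Agent()
agent.act("I said hello")              # looped back on the next step
print(agent.step("human says hi"))
# [('not-self', 'human says hi'), ('self', 'I said hello')]
```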
And for your biggest question:
Quote:
did it take these memories into account for understanding the meaning of everything?
|
No. The idea was that expert systems were very good at asking questions and answering them on specific things. For example, an expert system that simplifies medical diagnosis only needs an ontology that covers the knowledge for that purpose. As diagnoses are confirmed or refuted by real human doctors, the expert system gets better at asking the right questions and interpreting the human responses. The memories (which are trees of questions and answers with weights) become more refined over time, but the constraint of limited scope means that many things are not remembered, because they do not contribute (positively or negatively) to the fitness (happiness) function.
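A hedged sketch of those weighted question-and-answer trees, under my own assumptions about the update rule (the node layout and names here are illustrative, not any particular expert-system shell): each question node holds weighted answers leading to further questions or to a diagnosis, and confirmation or refutation by a human doctor nudges the weights along the path that was taken. Branches whose weights decay toward zero are effectively "not remembered".

```python
class Node:
    def __init__(self, question=None, diagnosis=None):
        self.question = question          # None for leaf (diagnosis) nodes
        self.diagnosis = diagnosis
        self.answers = {}                 # answer text -> [weight, child Node]

    def add(self, answer, child, weight=1.0):
        self.answers[answer] = [weight, child]
        return child

def diagnose(node, answer_for):
    """Walk the tree following the doctor's answers, recording the path taken."""
    path = []
    while node.question is not None:
        answer = answer_for(node.question)
        path.append((node, answer))
        node = node.answers[answer][1]
    return path, node.diagnosis

def reinforce(path, confirmed, rate=0.2):
    """A confirmed diagnosis strengthens the path taken; a refuted one weakens it."""
    delta = rate if confirmed else -rate
    for node, answer in path:
        node.answers[answer][0] = max(0.0, node.answers[answer][0] + delta)

# Usage: a two-question toy ontology for a single complaint.
root = Node(question="Do you have a fever?")
cough = root.add("yes", Node(question="Do you have a cough?"))
cough.add("yes", Node(diagnosis="flu"))
cough.add("no", Node(diagnosis="other infection"))
root.add("no", Node(diagnosis="not flu"))

answers = {"Do you have a fever?": "yes", "Do you have a cough?": "yes"}
path, dx = diagnose(root, answers.get)
reinforce(path, confirmed=(dx == "flu"))   # a doctor confirms, so the path's weights go up
```

The limited scope shows up in how small the tree can stay: anything outside the ontology never gets a node, and branches that never move the weights simply fade out of the system's "memory".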