From Simple Patterns to Sentience’s Complexity

“In order for AI to be able to overtake a programmer’s job implies that the client knows what he wants. We’re safe…”
This job-related predicament arises, as I see it, only as a brief moment in a much broader natural course. It is the kind of meme that strives to survive; it probably deserves a definitive NFT minting as of 03.2022, before it soon falls into oblivion.
I want to start tackling its survival efficiency by talking about AI and its power as I see it from this standpoint: in the past decade, AI’s potential has taken only its first infant steps, yet with clear signals of world-changing capabilities. And it goes further, through refinements of its (rightly chosen) ontology.
We live in a world (or better said, this is the way the world works) in which almost everything employs, from within, some mechanics of refinement and adaptation: a machinery that moves things toward escaping chaos, and this at great energy consumption, energy that partly comes from within (as the “will” opposed to the “death drive”). This goes from the inherent patterns in nature up to the far end of the spectrum of human activity(1). It is the underlying battle between order and entropy(2) on the surface of our particular(3) universe with its laws. Above this sits the layer of simple emerging patterns, then the ultimate, macro refinement substrate of sentient manifestations, and on top of it the layer of the symbolic, of cultures and abstract thinking. A predictable pattern, rising further within the brain’s energy patterns toward a culminating tip that leads to the creation of synthetic worlds, artificial sentience, and transcendental states of being in the digital universe as the next steps. A macro, ever-growing vertical ontology at work.
Within this broader context, refinement produces a further deepening of the domains. Within the AI domain, current advances allow neural training on larger-than-ever data sets, as in language and vision, with a touch of the symbolic, toward incipient meaning. In the context of programming, we see the first results of training on a good part of all human-written programming code, and we are able to put that to work on business requirements, with real programming languages on real use cases (necessary for a program to have a purpose), for now in the form of AI-assisted programming(4).
And further refinement would lead to a more natural way of conceiving programs through language processing of the requirements, from the problem statement to the actual code generation. With yet further refinement into symbolic AI, the predictable outcome would be not only answering questions, but solutions offered by the AI prior to the question being asked(5). All of that within domain criteria based on programming/AI ethics, best-practice solutions, security, cultural impact, etc.
On the side of symbolic AI there is currently an upward trend of trying different models of processing, a process that in itself requires further research. At the same time, I see this process hindered by the fact that the models still map, or try to mimic, partial models of the mind, trying to explain how the brain works and posing answers to questions related to consciousness(6).
I am still on the path, researching my own symbolic model within the essential, unspoiled concepts advanced through the innovative approach of Ludwig Wittgenstein:

“The reason computers have no understanding of the sentences they process is not that they lack sufficient neuronal complexity, but that they are not, and cannot be, participants in the culture to which the sentences belong. A sentence does not acquire meaning through the correlation, one to one, of its words with objects in the world; it acquires meaning through the use that is made of it in the communal life of human beings.”

This is not only what many would call, by yesterday’s standards, a “grim” future; it is also a reminder that one cannot oppose the refinements, because opposition too requires effort and energy that no one possesses in sufficient amount, only a self-cultivated death drive that helps on the short run alone… so remember this meme and laugh at its NFT later.

(1) forms of life with language-game adaptation, creative activities with continuous refinement of their ontological models, circulating concept cultures.
(2) with simple patterns from which something emerges, and with the counter-action of opposing forces, from nature up to the psyche and the symbolic: the death drive.
(3) multiverse theory, in which, very briefly explained, eternal timeless energy waves produce bubbles of universes, each with its own fundamentals.
(4) Copilot, software trained on a good part of the GitHub source code.
(5) if we have the right domain question, we have the answer; the answer is there, it is only that something briefly obscures it from view.
(6) on questions related to the knowledge of ourselves, which, in fact, are not of a scientific nature.

C. Stefan / 24.03.2022

AI Weak and Strong

Capabilities currently classified as weak AI include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go), self-driving cars, intelligent routing in content delivery networks, and interpreting complex data. These are achieved mainly through calculation on prediction problems: extrapolation based on feeding in quality data.
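
As a toy illustration of that point (a minimal sketch of my own, not taken from any particular system), ordinary least squares fits past observations and extrapolates the next value: prediction from quality data, with no understanding involved.

```python
# Fit a line to observed (x, y) pairs and extrapolate beyond them.
# This is "weak AI" reduced to its skeleton: calculation on a prediction
# problem, driven entirely by the quality of the data fed in.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return slope, my - slope * mx  # slope and intercept

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]  # noisy observations of roughly y = 2x
slope, intercept = fit_line(xs, ys)
print(round(slope * 6 + intercept, 1))  # extrapolated value at x = 6 -> 11.9
```

The quality of the extrapolation stands or falls with the data; the formula itself carries no meaning of what x and y are.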

The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects.

General intelligence is among the field’s long-term goals. Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy, neuroscience and artificial psychology.

From a deep learning perspective, even neuroevolution approaches have their limits with respect to strong AI: no matter how much the population of networks grows to attain the objective, the acknowledgment of the solution has to come from an external “decidant.”

“That is, evolution in this case is not just deciding the architecture and weights, but also the rules that guide how and when particular weights change.” (from “Neuroevolution: A different kind of deep learning” by Kenneth O. Stanley)
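
The point about the external “decidant” can be made concrete with a minimal neuroevolution sketch (a toy construction of my own, not Stanley’s algorithm): a population of tiny fixed-architecture networks is evolved by mutation and truncation selection to fit XOR. Note that the `fitness` function judges the networks from outside; the population itself never decides whether a solution is good.

```python
import math
import random

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # a fixed 2-2-1 network: 9 weights (2x2 hidden + 2 biases, 2 output + 1 bias)
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return 1.0 / (1.0 + math.exp(-(w[6] * h0 + w[7] * h1 + w[8])))

def fitness(w):
    # the external "decidant": negative squared error over the XOR cases
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=60, generations=400, sigma=0.4, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 5]  # keep the best fifth unchanged (elitism)
        pop = elite + [
            [w + rng.gauss(0, sigma) for w in rng.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=fitness)

best = evolve()
print([round(forward(best, x)) for x, _ in XOR])
```

Elitism makes the best fitness non-decreasing across generations, yet the acknowledgment that XOR has actually been solved lives entirely outside the evolving population, in the fitness function we supplied.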

In the context of strong AI, I really think that we need a kind of “attractors” framework in a “loose” decidant role (i.e., a “conscience”).

“Strong AI is related to the field of consciousness, sentience, mind. We are on the way to general intelligence, with every AI field evolving (from natural language processing to cognitive computing, social intelligence, planning, learning, perception, etc.). The rules that govern the universe are already there; our mind works under patterns within the universe’s patterns. To realize strong AI we need to hook the machine into this flow of our as-such universe of small-world networks*. Consciousness is not programmable; it is a result of the sensory which, when hooked to the flow, can be seen as a result of the “strange attractors” created in the flow. A strong AI will exist when all the sensory inputs have proper cognitive processing; conscience will arise de facto from the space where the inputs exist, but under a proactive threshold. The triggering occurs under the influence of strange attractors, and thus the conscience manifests itself.” (Definition of Strong AI by Essential-Works, C. Stefan, 04.2017)

* See: Watts–Strogatz model & Barabási–Albert model.
* Restricted Boltzmann machines and self-organizing systems.
* Active inference.
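
The small-world reference above can be illustrated with a minimal Watts–Strogatz generator (a stdlib-only sketch of my own; function and parameter names are not from any library): a ring lattice whose edges are randomly rewired, which sharply shortens average path lengths, the “small-world” effect.

```python
import random
from collections import defaultdict, deque

def watts_strogatz(n, k, p, seed=0):
    """Ring of n nodes, each tied to its k nearest neighbours; each edge is rewired with probability p."""
    rng = random.Random(seed)
    ring = [(i, (i + j) % n) for i in range(n) for j in range(1, k // 2 + 1)]
    edges, taken = [], set(map(frozenset, ring))
    for u, v in ring:
        if rng.random() < p:
            w = rng.randrange(n)  # pick a new endpoint, avoiding self-loops and duplicates
            while w == u or frozenset((u, w)) in taken:
                w = rng.randrange(n)
            taken.add(frozenset((u, w)))
            edges.append((u, w))
        else:
            edges.append((u, v))
    return edges

def avg_path_length(edges, n):
    # mean BFS distance over all reachable node pairs
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    total = pairs = 0
    for s in range(n):
        dist, queue = {s: 0}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

ring = watts_strogatz(100, 4, 0.0)
small_world = watts_strogatz(100, 4, 0.1)
print(avg_path_length(ring, 100), avg_path_length(small_world, 100))
```

With p = 0 the graph is a plain ring lattice with long average paths; a little rewiring (p around 0.1) already produces much shorter paths while most of the local structure survives, which is the regime Watts and Strogatz identified.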