From Simple Patterns to Sentience’s Complexity

“For AI to be able to overtake a programmer’s job implies that the client knows what he wants. We’re safe…”
This job-related predicament arises, as I see it, only for a brief moment in a much broader natural course. It is the kind of meme that strives to survive; it probably deserves a definitive NFT minting as of March 2022, before it soon falls into oblivion.
I want to start examining its survival efficiency by talking about AI and its power from this standpoint: over the past decade, AI’s potential has taken only the first steps of its infancy, yet with clear signals of world-changing capabilities. This continues further down, through refinements of its (rightly chosen) ontology.
Preamble
We live in a world (or, better said, this is the way the world works) in which almost everything employs, from within, some mechanics of refinement and adaptation: a machinery that moves things toward escaping chaos, at great energy cost, though part of that energy comes from within (as the “will” opposed to the “death drive”). This spans from the inherent patterns in nature up to the far end of the spectrum of human activity(1). It is the underlying battle between order and entropy(2) on the surface of our particular(3) universe with its laws. Above this sits the layer of simple emerging patterns, then the ultimate macro-refinement substrate of sentient manifestations, and on top of that the layer of the symbolic: cultures and abstract thinking. A predictable pattern, rising further within the brain’s energy patterns toward a culminating tip that leads to the creation of synthetic worlds, artificial sentience, and transcendental states of being in the digital universe as the next steps. A macro, ever-growing vertical ontology at work.
Refinement
Within this broader context, refinement produces a further deepening of each domain. Within the AI domain, current advances allow neural training on larger-than-ever data sets, as in language and vision, with a touch of the symbolic, toward incipient meaning. In the context of programming, we see the first results of training on a good part of all human-written code, and we are able to put that to work on business requirements, with programming languages, on real use cases (necessary for a program to have a purpose), for now in the form of AI-assisted programming(4).
Further refinement would lead to a more natural way of conceiving programs through language processing of the requirements, from the problem statement to the actual code generation. And, with yet further refinement into symbolic AI, the predictable outcome would be not only answering questions, but solutions offered by AI before the question is even asked(5). All that within domain criteria based on programming/AI ethics, best-practice solutions, security, cultural impact, etc.
Symbolic
On the side of symbolic AI, there is at this time an upward trend of trying different processing models, a process that itself requires further research. At the same time, I see this process hindered by the fact that the models still map, or try to mimic, partial models of the mind, trying to explain how the brain works and posing answers to questions related to consciousness(6).
I am still on the path, researching my own symbolic model within the essential, unspoiled concepts advanced through the innovative approach of Ludwig Wittgenstein:

“The reason computers have no understanding of the sentences they process is not that they lack sufficient neuronal complexity, but that they are not, and cannot be, participants in the culture to which the sentences belong. A sentence does not acquire meaning through the correlation, one to one, of its words with objects in the world; it acquires meaning through the use that is made of it in the communal life of human beings.”

This is not only what many would, by yesterday’s standards, call a “grim” future. It should in fact be remembered that one cannot oppose these refinements, because opposing them also requires effort and energy that no one possesses in sufficient measure: a self-cultivated death drive that helps only in the short run. So remember this meme and laugh at its NFT later.

(1) forms of life with language-game adaptation, creative activities with continuous refinement of their ontological models, circulating concept cultures.
(2) with simple patterns from which something emerges, and with the counteraction of opposing forces, from nature up to the psyche and the symbolic: the death drive.
(3) multiverse theory, very briefly explained: eternal, timeless energy waves produce bubbles of universes, each with its own fundamentals.
(4) GitHub Copilot, software trained on a large portion of the source code hosted on GitHub.
(5) if we have the right domain question, we have the answer; the answer is already there, it is only that something briefly obscures it from view.
(6) on questions related to knowledge of ourselves, which are, in fact, not of a scientific nature.

C. Stefan / 24.03.2022

Big Data Pitfalls

Avoid Simpson’s paradox:
This paradox refers to a phenomenon in which the association between a pair of variables (X, Y) reverses sign upon conditioning on a third variable Z, regardless of the value taken by Z. If we partition the data into subpopulations, each representing a specific value of the third variable, the phenomenon appears as a sign reversal between the associations measured in the disaggregated subpopulations and the association in the aggregated data, which describes the population as a whole.
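The sign reversal is easy to see in code. Below is a minimal sketch using the classic kidney-stone treatment numbers (Charig et al., 1986), a standard illustration of the paradox: treatment A beats B within each subgroup, yet loses in the aggregate.

```python
# Simpson's paradox: A wins in every subgroup but loses overall.
# (successes, total) per treatment, split by stone size.
data = {
    "small": {"A": (81, 87),   "B": (234, 270)},
    "large": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, total):
    return successes / total

for size, groups in data.items():
    a, b = rate(*groups["A"]), rate(*groups["B"])
    print(f"{size} stones: A={a:.0%} vs B={b:.0%} -> A wins: {a > b}")

# Aggregate over both subgroups: the comparison flips sign.
agg = {t: tuple(map(sum, zip(data["small"][t], data["large"][t])))
       for t in ("A", "B")}
a_all, b_all = rate(*agg["A"]), rate(*agg["B"])
print(f"overall:      A={a_all:.0%} vs B={b_all:.0%} -> A wins: {a_all > b_all}")
```

Here the third variable Z (stone size) is a confounder: severe cases were given A more often, dragging down its aggregate rate. Which level of analysis is the “right” one depends on the causal story, not on the data alone.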

Right ML algorithm usage: use the right approach for machine learning and find the algorithm appropriate to your specific problem. For example, for a quick numeric prediction, a regression tree or linear regression is a reasonable baseline; for a yes/no outcome, logistic regression, which yields class probabilities rather than numeric values.
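A minimal sketch of matching the algorithm to the target type, assuming scikit-learn is installed; the data here is synthetic, for illustration only.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))

# Numeric target -> a regression tree gives a quick numeric prediction.
y_numeric = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.1, 200)
tree = DecisionTreeRegressor(max_depth=4).fit(X, y_numeric)
print("numeric prediction:", tree.predict([[0.5, -0.5]])[0])

# Binary target -> logistic regression yields a class probability.
y_class = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y_class)
print("P(class=1):", clf.predict_proba([[0.5, -0.5]])[0, 1])
```

Using a classifier where the target is a continuous quantity (or vice versa) is one of the most common mismatches in practice.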

Keep in mind the Prisoner’s Dilemma: as when “cigarette manufacturers endorsed the making of laws banning cigarette advertising, understanding that this would reduce ad costs for all parties and increase profits across the industry”. So it is with business strategy, and down to big data processing.
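The advertising example can be written down as a 2x2 game. The payoff numbers below are illustrative assumptions, not industry data; what matters is their ordering, which makes advertising individually dominant yet collectively worse.

```python
# Prisoner's Dilemma sketch: "advertise" = defect, "dont" = cooperate.
# Payoffs are hypothetical, chosen only to satisfy the PD ordering.
payoff = {  # (row_move, col_move) -> (row_payoff, col_payoff)
    ("advertise", "advertise"): (3, 3),  # both pay ad costs
    ("advertise", "dont"):      (8, 1),  # advertiser steals market share
    ("dont",      "advertise"): (1, 8),
    ("dont",      "dont"):      (6, 6),  # ad ban: everyone saves
}

def best_response(opponent_move):
    """Row player's best reply to a fixed opponent move."""
    return max(("advertise", "dont"),
               key=lambda m: payoff[(m, opponent_move)][0])

# Advertising dominates individually...
assert best_response("advertise") == "advertise"
assert best_response("dont") == "advertise"
# ...yet the mutual-advertising equilibrium pays less than a mutual ban.
print("equilibrium payoff:", payoff[("advertise", "advertise")][0])
print("cooperative payoff:", payoff[("dont", "dont")][0])
```

The legal ban works like an enforced cooperation device: it removes the dominant defection, which is why the manufacturers welcomed it. Data-sharing and infrastructure-cost decisions between competing teams often have the same structure.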

Consider Gödel’s theorem: in any consistent formal system powerful enough to express arithmetic (number theory, etc.), there are true statements that cannot be proved from the rules within that system. The system, in a way, transcends itself. Hence one line of argument about the road to strong AI, for example.

Keep in mind the exponentially more powerful quantum computers of the future. For example, build different, quantum-resistant cryptographic algorithms that can withstand the power of future qubits.
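For symmetric ciphers, the threat can be sketched with back-of-the-envelope arithmetic: Grover’s algorithm searches N keys in roughly sqrt(N) steps, halving the effective bit security. This assumes brute-force key search is the best attack; real post-quantum analysis (especially for public-key schemes, where Shor’s algorithm applies) is far subtler.

```python
import math

def grover_effective_bits(key_bits: int) -> float:
    """Effective security of a key_bits symmetric key under Grover search."""
    classical_ops = 2 ** key_bits
    quantum_ops = math.isqrt(classical_ops)  # ~sqrt(N) Grover iterations
    return math.log2(quantum_ops)

for bits in (128, 256):
    print(f"AES-{bits}: ~{grover_effective_bits(bits):.0f} bits vs quantum")
```

AES-128 drops to roughly 64-bit security, while AES-256 keeps a comfortable ~128 bits, which is why doubling symmetric key lengths is the standard advice.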