Incentive for AI-to-AGI research

The recent proliferation of ML models, LLMs in particular, has produced a new wave in the AI research community, one that, for the first time in a long while, transcends the laboratories of AI research and touches users directly, creating the AI hype of 2023. Beyond the hype, however, we see it touching every domain and aspect of human life, down to the very core of creativity, tool engineering, and life itself.
This has awakened in us, researchers and enthusiasts, a mix of reactions: deferral of the hype, questioning of results, concerns over AI ethics, and a wish to revisit more than half a century of multidisciplinary AI research. And for good reasons.

Beyond all the heat, we cooled down and took a bird's-eye view:
1. AI concepts have always been around, in theoretical and practical infancy, since computers were invented*.
2. AI has advanced from multidisciplinary angles. There must be a great deal of research into ideas, alongside algorithmic advances from many sides.
3. AI arose as a necessity where humans, with the tools at hand, could no longer handle certain aspects of data.
4. We shape AI; AI, as a tool, will shape us. We believe for the good, for one great network, for coexistence with it, keeping humanity in the loop.
5. Our mission, alongside contributing to AI, is regulating it: mitigating misuse and thinking about how to democratize human-aligned models.
6. Our other mission is teaching the young what AI is, what it implies, and what its deep aspects are, to mitigate over-simplification and to keep them from picking only the easy fruits, lest AI research go into a very steep decline.
7. There is a converging point in sight, strong AI, but we must engage, with a thorough research mindset, in the activity of compressing human aspects into digital models.
8. Current implementations of LLMs are not close to human abilities, but they do exhibit some interesting capabilities (no, reasoning is not one of them): a dreaming-like machine, which we sometimes prompt into our logical space.**
9. That there is something, though, that can be seen as powerful enough to change us requires attention and resources.
10. We think the advances in NN architectures, and in generative language and media, may have lifted a big former concern that this approach would never yield good results. It has, for now, great augmentation potential across many domains, from the artistic and literary to the engineering and programming side.

It is a step taken that, nevertheless, made its call to us.
So here we are: our research incentive.

*Even before: “Turing drew from Wittgenstein’s 1939 Cambridge lectures the idea that everyday typings of concepts, our evolving “phraseology”, plays a fundamental role in the application of logic. After going to Bletchley Park, Turing continued to think about the importance of notations, and in “The Reform of Mathematical Notation” (1944/45) he suggested that symbolic logic opens itself up to a plurality of systems, attending to the specific uses to which notations are put, and Turing argued that we should take into account everyday language when constructing logical notations. This Wittgensteinian aspect of Turing’s philosophy of logic culminated in his 1948 report to the National Physical Laboratory, “Intelligent Machinery”, the founding document of AI” – as argued by researcher Juliet Floyd in “Wittgenstein, Turing, and AI”.

**We try not to deceive ourselves: asking whether one had a dream that one does not remember is meaningful only to the same extent as asking what a quantum state might be before observation. This is one example of a class of less meaningful questions that we should frame with a research-limits mindset: do not be lured by the “beyond”; meaningful things are nearer than we think, and it is hard not to deceive oneself!