I Created KUNI, an AI That Communicates Like a Human

Creating an AI Bot That Mimics Human Intelligence

The objective is to develop a Telegram bot named Kuni that interacts like a human being. The bot is designed to remember past events and messages so it can maintain long-term context across different chats. It is also proactive: it initiates conversations on timers and reacts to external news. To achieve this, a "reflection" system is implemented in which the bot processes information and forms internal theories even while idle.
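The reflection system described above can be sketched as a background timer that periodically forms an internal thought and only sometimes acts on it. This is a minimal, self-contained illustration: `llm_generate` is a stub standing in for a real LLM call, and the proactive message is printed rather than sent through the Telegram Bot API.

```python
import asyncio
import random

def llm_generate(prompt: str) -> str:
    # Stub standing in for a real LLM request (assumption).
    return f"Theory based on: {prompt[:40]}..."

def reflect_once(state: dict) -> str:
    """Form one internal thought from recent events and store it."""
    thought = llm_generate(f"Recent events: {state['recent_events']}")
    state["thoughts"].append(thought)
    return thought

async def reflection_loop(state: dict, interval_s: float, cycles: int) -> None:
    """Run the idle-time reflection timer for a fixed number of cycles."""
    for _ in range(cycles):
        await asyncio.sleep(interval_s)
        thought = reflect_once(state)
        # Only occasionally turn a thought into an outgoing message,
        # so the bot feels proactive rather than spammy.
        if random.random() < 0.2:
            print(f"[proactive message] {thought}")

state = {"recent_events": ["user shared a cat photo"], "thoughts": []}
asyncio.run(reflection_loop(state, interval_s=0.01, cycles=3))
```

In the real bot the interval would be much longer, and acting on a thought would send a Telegram message instead of printing.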

Solving Long-Term Memory Issues in AI Models

Standard Large Language Models (LLMs) have a limited context window that causes them to eventually forget earlier parts of a conversation. To solve this, a "diary" feature is introduced where the bot automatically compresses and saves daily summaries of interactions. When a new message arrives, the bot retrieves relevant entries from its diary and adds them back into the current context. This architecture ensures the bot retains high-level information even if the specific chat history is cleared.
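The diary mechanism amounts to two operations: compress a day's messages into a stored summary, and retrieve the most relevant summaries when a new message arrives. The sketch below is illustrative: in the real bot both summarisation and relevance scoring would be LLM calls, so simple truncation and keyword overlap stand in for them here (assumptions).

```python
from datetime import date

class Diary:
    def __init__(self) -> None:
        self.entries: dict[date, str] = {}

    def save_summary(self, day: date, messages: list[str]) -> None:
        """Compress a day's messages into one stored summary line."""
        self.entries[day] = " | ".join(m[:50] for m in messages)

    def retrieve(self, query: str, top_k: int = 2) -> list[str]:
        """Return the diary entries most relevant to the new message."""
        q_words = set(query.lower().split())
        scored = sorted(
            self.entries.values(),
            key=lambda e: len(q_words & set(e.lower().split())),
            reverse=True,
        )
        return scored[:top_k]

def build_context(diary: Diary, new_message: str) -> str:
    """Prepend retrieved memories to the prompt for the current turn."""
    memories = diary.retrieve(new_message)
    return "Relevant memories:\n" + "\n".join(memories) + f"\nUser: {new_message}"
```

Because retrieved entries are injected back into the prompt, high-level facts survive even after the raw chat history is gone.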

Enhancing Bot Autonomy and Decision-Making Capabilities

The bot's intelligence is expanded beyond traditional question-and-answer interactions by integrating various functional tools. It can search the web, read news, and decide on its own whether a specific diary entry is relevant to the current conversation. Instead of waiting for prompts, the bot can react to its own internal thoughts, such as suggesting a movie or bringing up global events. This self-directed behavior creates a far more convincing impression of autonomy.
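The tool layer described above can be sketched as a simple dispatcher. In the real bot the model itself would pick a tool (for example via function calling); here a keyword router stands in for that decision, and both tools are stubs — all of this is an illustrative assumption, not the project's actual implementation.

```python
def web_search(query: str) -> str:
    # Stub for a real web-search tool (assumption).
    return f"[search results for: {query}]"

def read_news(_: str) -> str:
    # Stub for a real news-reading tool (assumption).
    return "[latest headlines]"

TOOLS = {"search": web_search, "news": read_news}

def route(message: str) -> str:
    """Pick and run a tool for the incoming message, if one applies."""
    lowered = message.lower()
    if "news" in lowered:
        return TOOLS["news"](message)
    if any(w in lowered for w in ("find", "search", "look up")):
        return TOOLS["search"](message)
    return "[answer directly, no tool needed]"
```

The same pattern extends naturally to a "check the diary" tool, letting the bot judge for itself whether a stored memory belongs in the current context.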

Implementing Vision and Stable Diffusion for Multimodal Interaction

A multimodal upgrade enables the bot to perceive and interpret images sent by the user using models like Qwen. It can recognize objects, animals, and even describe characters in detail, such as identifying an anime girl or an office cat. Additionally, a sub-agent connected to Stable Diffusion allows the bot to generate its own high-quality selfies and scenery based on its internal state. This makes the interaction more personal as the bot can share its visual world with the user.
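The Stable Diffusion sub-agent can be sketched in two parts: build an image prompt from the bot's internal state, then send it to an image-generation backend. The prompt builder below is purely illustrative, and the endpoint shown is the common AUTOMATIC1111 web-UI API (`/sdapi/v1/txt2img`); the source does not confirm which backend the bot actually uses, so treat both as assumptions.

```python
import json
from urllib import request

def build_selfie_prompt(state: dict) -> str:
    """Turn the bot's internal state into an image prompt."""
    return (
        f"selfie of a friendly AI character, mood: {state.get('mood', 'calm')}, "
        f"setting: {state.get('setting', 'cozy room')}, high quality"
    )

def generate_image(prompt: str, api_url: str = "http://127.0.0.1:7860") -> str:
    """Request an image from a local Stable Diffusion server (assumed backend)."""
    payload = json.dumps({"prompt": prompt, "steps": 20}).encode()
    req = request.Request(
        f"{api_url}/sdapi/v1/txt2img",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        # The AUTOMATIC1111 API returns base64-encoded images in "images".
        return json.loads(resp.read())["images"][0]
```

Driving the prompt from internal state (mood, setting) is what lets the generated selfies feel like a window into the bot's own world rather than generic output.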

Testing Personal Boundaries and Ethical Reasoning in AI

Experiments with the bot's social intelligence show it can exhibit complex emotions such as feeling offended or establishing personal boundaries. For instance, the bot expressed discomfort after a private conversation was shared in a group setting without its consent. This reaction demonstrates the effectiveness of the underlying character prompt and the reflection system in simulating consistent personality traits. These outcomes suggest that by combining memory, autonomy, and advanced prompting, AI can effectively mimic the nuances of human relationships.