<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Oddbit AI]]></title><description><![CDATA[Getting ready for the singularity.]]></description><link>https://www.oddbit-ai.org/</link><image><url>https://www.oddbit-ai.org/favicon.png</url><title>Oddbit AI</title><link>https://www.oddbit-ai.org/</link></image><generator>Ghost 5.32</generator><lastBuildDate>Mon, 06 Apr 2026 12:30:12 GMT</lastBuildDate><atom:link href="https://www.oddbit-ai.org/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Bringing an Old Computer Back to Life: How AI Helped Us Recreate the Motherboard]]></title><description><![CDATA[<p>In the world of electronics and tech, we often feel a sense of nostalgia that inspires us to take on exciting projects that connect the past with the present. We recently decided to bring back the Slovene computer Iskra Delta Partner from the 1980s, and this project really captured our</p>]]></description><link>https://www.oddbit-ai.org/how-ai-helped-us-recreate-a-motherboard/</link><guid isPermaLink="false">64d8c1df6efb2d0001bbef60</guid><dc:creator><![CDATA[Miha Grčar]]></dc:creator><pubDate>Sun, 13 Aug 2023 14:32:50 GMT</pubDate><content:encoded><![CDATA[<p>In the world of electronics and tech, we often feel a sense of nostalgia that inspires us to take on exciting projects that connect the past with the present. We recently decided to bring back the Slovene computer Iskra Delta Partner from the 1980s, and this project really captured our excitement for the past. Our goal was to make an exact copy of the main computer board, which turned out to be a pretty detailed and challenging task. One of the big things we had to do was put more than 1000 small bridges, also known as &quot;vias,&quot; onto the board. 
In this blog post, we&apos;re going to share how we tackled this puzzle using <a href="https://en.wikipedia.org/wiki/Computer_vision">computer vision</a> to make the process more efficient.</p><p>When we were figuring out where to put the vias onto the board in <a href="https://www.kicad.org/">KiCad</a>, the traditional way was to place them one by one onto the grid while drawing lines (traces) to connect them. But with so many vias to deal with, we realized we needed a better approach. This led us to explore whether we could speed up the process by using artificial intelligence (computer vision). At first, we used an open-source library called <a href="https://opencv.org/">OpenCV</a>, which comes with some ready-made ways to recognize objects. It worked pretty well right out of the box. However, when we switched to different scans, our initial algorithm faced challenges due to factors like lighting, contrast, and resolution.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.oddbit-ai.org/content/images/2023/08/image--16-.png" class="kg-image" alt loading="lazy" width="720" height="598" srcset="https://www.oddbit-ai.org/content/images/size/w600/2023/08/image--16-.png 600w, https://www.oddbit-ai.org/content/images/2023/08/image--16-.png 720w" sizes="(min-width: 720px) 720px"><figcaption>Our initial attempt involved using OpenCV to identify the vias. This approach showed significant potential. Unfortunately, this specific scan was too distorted to fit onto the grid. As a result, we had to create a different type of scan, which didn&apos;t work as effectively with this via recognition algorithm. (Credit: Toma&#x17E; &#x160;tih)</figcaption></figure><p>As we worked on preprocessing the images and fine-tuning the object recognition algorithm&apos;s settings, we simultaneously began creating our own method to detect the vias. This approach showed quicker progress and yielded better outcomes. 
The algorithm managed to locate and place around 970 vias quite accurately, marking a definite success. The algorithm&apos;s core concept is quite simple. We slide a small square window across the entire scan and attempt to figure out if the square contains a via. To accomplish this, we describe the square&apos;s contents with a <a href="https://en.wikipedia.org/wiki/Feature_(machine_learning)">feature vector</a> that contains <a href="https://en.wikipedia.org/wiki/Image_histogram">histogram</a>-like features. Before the algorithm runs, we manually mark several vias, which guides the algorithm on what to search for. Only 9 vias were manually marked, yet that proved enough for an almost perfect outcome. As the square window moves across the image, it compares what it &quot;sees&quot; to the known via feature vectors (its &quot;gold standard&quot;). When a notable resemblance is detected, the square&apos;s content is recognized as a via. This likeness is assessed using the <a href="https://en.wikipedia.org/wiki/Cosine_similarity">cosine similarity</a> measure.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.oddbit-ai.org/content/images/2023/08/front-300dpi-cv.png" class="kg-image" alt loading="lazy" width="2000" height="1626" srcset="https://www.oddbit-ai.org/content/images/size/w600/2023/08/front-300dpi-cv.png 600w, https://www.oddbit-ai.org/content/images/size/w1000/2023/08/front-300dpi-cv.png 1000w, https://www.oddbit-ai.org/content/images/size/w1600/2023/08/front-300dpi-cv.png 1600w, https://www.oddbit-ai.org/content/images/size/w2400/2023/08/front-300dpi-cv.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>The grid-aligned scan with the recognized vias.</figcaption></figure><p>We also implemented the capability to repeat the recognition process iteratively and expand the gold standard with each round. Although this turned out to be beneficial, it wasn&apos;t the most significant improvement we made. 
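To make the matching loop above concrete, here is a toy sketch in Python with NumPy. This is a hypothetical illustration only — the actual implementation is the C# program linked under Resources, and the window size, histogram features, and similarity threshold here are simplified stand-ins:

```python
import numpy as np

def patch_features(patch, bins=8):
    # Histogram-like feature vector describing the window's contents.
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return hist.astype(float)

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b) / (na * nb) if na and nb else 0.0

def find_vias(image, seeds, win=7, threshold=0.999, tabu=None):
    """Slide a win x win window over the scan and report the top-left
    corners of windows whose features resemble any manually marked via
    (the "gold standard"). `tabu` is an optional boolean mask marking
    positions the search should skip."""
    gold = [patch_features(image[r:r + win, c:c + win]) for r, c in seeds]
    hits = []
    rows, cols = image.shape
    for r in range(rows - win + 1):
        for c in range(cols - win + 1):
            if tabu is not None and tabu[r, c]:
                continue
            f = patch_features(image[r:r + win, c:c + win])
            if max(cosine(f, g) for g in gold) >= threshold:
                hits.append((r, c))
    return hits
```

On a synthetic "scan" with two identical bright pads, marking just one of them as the gold standard is enough for the window to flag the other. A real pipeline would additionally merge overlapping hits into single via positions and snap them to the KiCad grid.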
We observed a substantial increase in accuracy when we introduced a &quot;tabu mask.&quot; This mask guides the computer on where not to search for vias. We crafted this mask using the traces (lines) from the scans, and it had a significant positive impact on accuracy and speed.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.oddbit-ai.org/content/images/2023/08/mask-300dpi.png" class="kg-image" alt loading="lazy" width="2000" height="1626" srcset="https://www.oddbit-ai.org/content/images/size/w600/2023/08/mask-300dpi.png 600w, https://www.oddbit-ai.org/content/images/size/w1000/2023/08/mask-300dpi.png 1000w, https://www.oddbit-ai.org/content/images/size/w1600/2023/08/mask-300dpi.png 1600w, https://www.oddbit-ai.org/content/images/size/w2400/2023/08/mask-300dpi.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>The tabu mask that was used to produce the final result. This mask was created by extracting the black traces from the scans and converting them into a single 1-bit image.</figcaption></figure><p>What began as a fun experiment evolved into a significant success story. Through the use of computer vision, we were able to noticeably accelerate our work. While it did take some time to puzzle through the details, the effort was absolutely rewarding. 
This project effectively demonstrates how employing artificial intelligence can be a powerful approach for overcoming challenges while reverse-engineering an old circuit board.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.oddbit-ai.org/content/images/2023/08/vezje-final-3d.png" class="kg-image" alt loading="lazy" width="2000" height="1073" srcset="https://www.oddbit-ai.org/content/images/size/w600/2023/08/vezje-final-3d.png 600w, https://www.oddbit-ai.org/content/images/size/w1000/2023/08/vezje-final-3d.png 1000w, https://www.oddbit-ai.org/content/images/size/w1600/2023/08/vezje-final-3d.png 1600w, https://www.oddbit-ai.org/content/images/size/w2400/2023/08/vezje-final-3d.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>The via recognition algorithm enabled us to get to the end result more quickly.</figcaption></figure><h2 id="resources"><strong>Resources</strong></h2><p>The source code of our via recognition algorithm is available <a href="https://github.com/OddbitRetro/IdpViaAI">here</a>. The entire algorithm is contained within a single <a href="https://github.com/OddbitRetro/IdpViaAI/blob/master/ViaCV/Program.cs">file</a> and is fairly easy to read and understand if you are familiar with the basic principles of machine learning.</p><h2 id="disclaimer"><strong>Disclaimer</strong></h2><p>The purpose of this post is not to present the latest and greatest technologies for image localization or object detection. This &quot;via recognition&quot; algorithm was developed with a clear goal in mind, and its further development was stopped once good-enough results were obtained. 
If you are interested in more recent and far more elaborate techniques, please consider reading about the <a href="https://en.wikipedia.org/wiki/Object_detection#Methods">neural network approaches</a>.</p>]]></content:encoded></item><item><title><![CDATA[How We Built Johnny - An Artificially Intelligent Virtual Agent]]></title><description><![CDATA[<p>In this blog post, we&apos;ll share our team&apos;s experience creating a virtual agent named Johnny during a recent one-day hackathon at work. Our objective was to develop a conversational virtual assistant capable of discussing <a href="https://bisonapp.com/en/">BISON</a>, our crypto and stock trading app, in both English and German.</p>]]></description><link>https://www.oddbit-ai.org/johnny-an-artificially-intelligent-virtual-agent/</link><guid isPermaLink="false">6400f3a26efb2d0001bbecb1</guid><dc:creator><![CDATA[Miha Grčar]]></dc:creator><pubDate>Mon, 06 Mar 2023 22:02:37 GMT</pubDate><content:encoded><![CDATA[<p>In this blog post, we&apos;ll share our team&apos;s experience creating a virtual agent named Johnny during a recent one-day hackathon at work. Our objective was to develop a conversational virtual assistant capable of discussing <a href="https://bisonapp.com/en/">BISON</a>, our crypto and stock trading app, in both English and German. We also aimed to design an aesthetically pleasing persona with vocal capabilities in both languages. Developing Johnny&apos;s visual representation was just as crucial as building his &quot;brain,&quot; and his appearance significantly contributed to the project&apos;s success.</p><p>Johnny&apos;s artificial cognitive abilities are based on a <a href="https://en.wikipedia.org/wiki/Generative_pre-trained_transformer">GPT (Generative Pre-trained Transformer)</a> model, a type of LLM (Large <a href="https://en.wikipedia.org/wiki/Language_model">Language Model</a>) that uses machine learning to produce coherent natural language responses to prompts. 
We decided to use one of the models provided by <a href="https://openai.com/">OpenAI</a>. They offer several <a href="https://en.wikipedia.org/wiki/GPT-3">GPT-3</a> models (GPT version 3), including &quot;davinci&quot;, &quot;curie&quot;, &quot;babbage&quot;, and &quot;ada&quot; (developed in 2020), as well as InstructGPT models (released in 2022) that can execute various natural language tasks such as content creation, question answering, and dialog generation, among others. Additionally, OpenAI provides <a href="https://openai.com/blog/chatgpt">ChatGPT</a>, an AI system optimized for conversational applications. However, since ChatGPT was not yet available through an API at the time of the hackathon, we decided to use the best API-accessible model available, which was &quot;text-davinci-003&quot;.</p><p>When testing the model, we noticed that it had a remarkable ability to stay within the context it was given. For example, when talking to the mischievous character Cartman from the TV show South Park, the model would provide responses that were consistent with Cartman&apos;s troublemaking nature. 
Similarly, when chatting with the absent-minded Homer Simpson, the model would generate responses consistent with Homer&apos;s habit of forgetting his youngest daughter.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.oddbit-ai.org/content/images/2023/03/image-8.png" class="kg-image" alt loading="lazy" width="2000" height="1125" srcset="https://www.oddbit-ai.org/content/images/size/w600/2023/03/image-8.png 600w, https://www.oddbit-ai.org/content/images/size/w1000/2023/03/image-8.png 1000w, https://www.oddbit-ai.org/content/images/size/w1600/2023/03/image-8.png 1600w, https://www.oddbit-ai.org/content/images/size/w2400/2023/03/image-8.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>Cartman is &quot;always causing trouble and stirring up drama.&quot;</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.oddbit-ai.org/content/images/2023/03/image-9.png" class="kg-image" alt loading="lazy" width="2000" height="1125" srcset="https://www.oddbit-ai.org/content/images/size/w600/2023/03/image-9.png 600w, https://www.oddbit-ai.org/content/images/size/w1000/2023/03/image-9.png 1000w, https://www.oddbit-ai.org/content/images/size/w1600/2023/03/image-9.png 1600w, https://www.oddbit-ai.org/content/images/size/w2400/2023/03/image-9.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>&quot;Maggie? 
Who the F is Maggie?&quot;</figcaption></figure><p>However, when we introduced a more serious context involving a character named Johnny, a stock and crypto trader who uses the BISON app for his trading activities, we noticed that the model had a tendency to &quot;hallucinate&quot; or make up information that it did not have.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.oddbit-ai.org/content/images/2023/03/image-10.png" class="kg-image" alt loading="lazy" width="2000" height="1125" srcset="https://www.oddbit-ai.org/content/images/size/w600/2023/03/image-10.png 600w, https://www.oddbit-ai.org/content/images/size/w1000/2023/03/image-10.png 1000w, https://www.oddbit-ai.org/content/images/size/w1600/2023/03/image-10.png 1600w, https://www.oddbit-ai.org/content/images/size/w2400/2023/03/image-10.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>The model &quot;hallucinates&quot; about how a BISON account can be topped up with a debit or credit card, Apple Pay, PayPal, or Google Pay, and how you can trade all sorts of fiat currencies and exotic coins on BISON.</figcaption></figure><p>Hallucinations are false facts that can be generated by AI models and are one of the biggest flaws of GPT models. They can occur due to a lack of necessary facts in the model, an insufficient understanding of the temporal component, or the inability to compute or infer facts. When faced with these gaps in knowledge and ability, the model may provide a likely continuation of the prompt instead of acknowledging its limitations and simply stating &quot;I don&apos;t know.&quot;</p><p>To prevent hallucinations, it is necessary to inject facts into the model to supplement its knowledge base. 
We had access to two datasets: the <a href="https://github.com/SowaLabs/AiOne2023/releases/download/v0.2.0/FAQ_embeddings.jsonl.zip">BISON FAQ Dataset</a>, which contained roughly 100 English and 100 German question-answer pairs, and the BISON Customer Support Dataset, which included over 90,000 conversations. Due to the latter&apos;s size, complexity, and sensitivity, we chose to use the FAQ Dataset for the hackathon.</p><p>There are two main approaches to injecting facts into a model: model fine-tuning or prompt engineering. Fine-tuning involves training the model on additional data to improve its performance on specific tasks. Unfortunately, OpenAI does not support fine-tuning for the model that we used during the hackathon. As a result, we had to rely on prompt engineering, which involves crafting prompts in a specific way to encourage the model to generate more accurate responses.</p><p>Our prompt consisted of several components. It started with a static context in which we gave the AI a name, profession, and purpose. Then we injected a chunk of the FAQ Dataset as well as the chat history with the user. Finally, we included a &quot;cue&quot;, waiting for the model to provide a likely continuation of the prompt. However, one of the main issues we faced was that the prompt had a limit of 4000 tokens, a token being something between a character and a word. We thus couldn&apos;t inject the entire FAQ Dataset or the entire chat history into the prompt. To overcome this limitation, we had to leave out the least relevant FAQ pairs and part of the older chat history. This required us to determine how much each FAQ pair was related to the user&apos;s question/input. 
That is where something called &quot;embeddings&quot; came into play.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.oddbit-ai.org/content/images/2023/03/image-12.png" class="kg-image" alt loading="lazy" width="2000" height="1125" srcset="https://www.oddbit-ai.org/content/images/size/w600/2023/03/image-12.png 600w, https://www.oddbit-ai.org/content/images/size/w1000/2023/03/image-12.png 1000w, https://www.oddbit-ai.org/content/images/size/w1600/2023/03/image-12.png 1600w, https://www.oddbit-ai.org/content/images/size/w2400/2023/03/image-12.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>The prompt includes static context, FAQ question-answer pairs (facts), chat history, and &quot;cue&quot;. The entire prompt must fit within the 4000-token limit imposed by OpenAI.</figcaption></figure><p>Embeddings are a way of representing words or texts in a numerical (vector) form. They are created by projecting texts into a high-dimensional space that is defined by a model tailored for semantic similarity. In this high-dimensional space, texts that are semantically similar would lie close together. This generally means that synonyms and even similar words, phrases, and texts in different languages would lie close together. The useful thing about embeddings is that it is possible to measure the &quot;closeness&quot; or similarity between two embedding vectors by using the dot product. 
This property makes it easy to compare different pieces of text and identify the ones that are most similar.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.oddbit-ai.org/content/images/2023/03/image-7.png" class="kg-image" alt loading="lazy" width="2000" height="1125" srcset="https://www.oddbit-ai.org/content/images/size/w600/2023/03/image-7.png 600w, https://www.oddbit-ai.org/content/images/size/w1000/2023/03/image-7.png 1000w, https://www.oddbit-ai.org/content/images/size/w1600/2023/03/image-7.png 1600w, https://www.oddbit-ai.org/content/images/size/w2400/2023/03/image-7.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>Embeddings: projecting texts into a high-dimensional space. Semantically similar texts lie close together in the space.</figcaption></figure><p>We used OpenAI&apos;s &quot;text-similarity-davinci-001&quot; model to generate embeddings. We embedded each question-answer pair from the FAQ dataset into the model space, creating numerical representations for each one. Additionally, we embedded the user&apos;s input each time it was provided. This allowed us to measure the similarity between the user&apos;s input and each of the FAQ items, enabling us to rank the items according to their similarity to the user&apos;s input. After ranking the FAQ items, we selected the top <em>N</em> most similar ones to include in the prompt. The value of <em>N</em> depended on how much space remained in the 4000-token prompt that we were constructing. Apart from the FAQ items, we also needed to include at least some chat history into the prompt. We decided to include (at least) the last three input-response pairs.</p><p>Using this methodology of prompt engineering with embeddings, our chatbot Johnny appeared smart and knowledgeable about BISON. But Johnny&apos;s appearance was also crucial to how people perceived and interacted with him. 
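The selection step described above — embed the FAQ items once, embed each user input, score by dot product, then keep the top-ranked items that still fit the token budget — can be sketched in pure Python. This is a hypothetical sketch with toy 3-dimensional vectors and a word count standing in for the tokenizer; the real system used OpenAI's "text-similarity-davinci-001" embeddings and the 4000-token limit mentioned earlier:

```python
def dot(a, b):
    # Dot product of two equal-length embedding vectors.
    return sum(x * y for x, y in zip(a, b))

def rank_faq_items(user_vec, faq_items):
    """Sort (text, embedding) FAQ items by dot-product similarity to the
    embedded user input, best match first."""
    scored = sorted(((dot(user_vec, vec), text) for text, vec in faq_items),
                    reverse=True)
    return [text for _, text in scored]

def fill_token_budget(texts, budget, n_tokens):
    # Greedily keep top-ranked items while they still fit the prompt budget.
    chosen, used = [], 0
    for text in texts:
        cost = n_tokens(text)
        if used + cost > budget:
            break
        chosen.append(text)
        used += cost
    return chosen

faq = [("You can deposit via SEPA transfer.", [0.9, 0.1, 0.0]),
       ("BISON is available in Germany.",     [0.1, 0.8, 0.2]),
       ("Crypto gains may be taxable.",       [0.0, 0.1, 0.9])]
ranked = rank_faq_items([1.0, 0.0, 0.1], faq)  # a "deposit"-like query vector
kept = fill_token_budget(ranked, budget=8, n_tokens=lambda t: len(t.split()))
```

In the real prompt, the same budget also has to leave room for the static context, the chat history, and the cue.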
To make him more interesting, we implemented speech synthesis with <a href="https://cloud.google.com/text-to-speech">Google&apos;s text-to-speech API</a> and lip syncing with the <a href="https://github.com/DanielSWolf/rhubarb-lip-sync">Rhubarb Lip Sync</a> tool. We also packaged everything neatly into a web app with visuals, animations, and a user interface.</p><p>One interesting detail in our speech synthesis pipeline was the use of a language detector to know which &quot;speaker&quot; to request through Google&apos;s text-to-speech API. Initially, we used an n-gram-based algorithm for language detection, but later we decided to use OpenAI for this purpose as well. The speech (audio) produced by the text-to-speech API was then fed to the Rhubarb lip-syncing tool. This tool takes speech as input and produces a timed sequence of lip-syncing cues. By showing the appropriate mouth image at each point in time, our Johnny appeared as if he were actually speaking the words.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.oddbit-ai.org/content/images/2023/03/image-14.png" class="kg-image" alt loading="lazy" width="2000" height="1125" srcset="https://www.oddbit-ai.org/content/images/size/w600/2023/03/image-14.png 600w, https://www.oddbit-ai.org/content/images/size/w1000/2023/03/image-14.png 1000w, https://www.oddbit-ai.org/content/images/size/w1600/2023/03/image-14.png 1600w, https://www.oddbit-ai.org/content/images/size/w2400/2023/03/image-14.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>The speech synthesis and lip syncing pipeline.</figcaption></figure><p>Johnny was undoubtedly the star of the evening. He impressed us with his knowledge and even drew us into engaging conversations. However, Johnny is not production-ready software and still hallucinates at times. In the future, we plan to connect Johnny to ChatGPT to enhance his intelligence. We&apos;re also considering similar tools to assist our customer support team. 
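For illustration, an n-gram language detector of the kind we initially used can be surprisingly small. The toy Python sketch below compares character-trigram counts by cosine similarity; the profile sentences are made up for the example, and this is not our actual detector (in practice, profiles would be built from large corpora):

```python
from collections import Counter

def trigrams(text):
    # Character-trigram counts; padding captures word-initial/final patterns.
    padded = f"  {text.lower()}  "
    return Counter(padded[i:i + 3] for i in range(len(padded) - 2))

def detect_language(text, profiles):
    """Return the language whose trigram profile is most similar
    (by cosine similarity) to the trigram counts of `text`."""
    def cos(a, b):
        num = sum(a[g] * b[g] for g in a if g in b)
        norm_a = sum(v * v for v in a.values()) ** 0.5
        norm_b = sum(v * v for v in b.values()) ** 0.5
        return num / (norm_a * norm_b) if norm_a and norm_b else 0.0
    probe = trigrams(text)
    return max(profiles, key=lambda lang: cos(probe, profiles[lang]))
```

The detected language then decides which English or German voice to request from the text-to-speech API.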
</p><figure class="kg-card kg-embed-card kg-card-hascaption"><iframe width="200" height="150" src="https://www.youtube.com/embed/SFoClThcGok?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="Johnny AI - serious conversation"></iframe><figcaption>Serious conversation with Johnny.</figcaption></figure><figure class="kg-card kg-embed-card kg-card-hascaption"><iframe width="200" height="150" src="https://www.youtube.com/embed/SmG9NZ-iZ1w?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="Johnny AI - trolling"></iframe><figcaption>Not so serious conversation with Johnny.</figcaption></figure><figure class="kg-card kg-embed-card kg-card-hascaption"><iframe width="200" height="150" src="https://www.youtube.com/embed/zPEjkxhVjGE?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="Johnny AI - speaking German"></iframe><figcaption>Speaking German with Johnny.</figcaption></figure><p>In conclusion, creating a virtual agent using a GPT model can result in impressive conversational abilities. However, these models can sometimes &quot;hallucinate&quot; or generate false information due to a lack of necessary facts. Injecting facts into the model through prompt engineering or model fine-tuning can improve performance. In this case, our team used prompt engineering, including embeddings to rank and select the most relevant FAQ items for the prompt. Despite the prompt&apos;s 4000-token limit, we were able to create an effective prompt that allowed our virtual agent, Johnny, to provide accurate information and have engaging conversations with users in both English and German. 
Overall, our hackathon experience showed the potential of GPT models for conversational AI and the importance of incorporating relevant data to improve their accuracy.</p><h2 id="resources">Resources</h2><ul><li>GitHub repository <br><a href="https://github.com/SowaLabs/AiOne2023">https://github.com/SowaLabs/AiOne2023</a></li></ul>]]></content:encoded></item></channel></rss>