Uli's LinkedIn posts


(Photo of my new book)

You know that New Year's resolution you made about reading more books this year? Well, I've got just the thing to kick off your 2024 reading list: I'm excited that my new book "Neurons & Nonsense – How AI rewired my creative DNA" is now available as a paperback and ebook, and you can get the ebook for free from January 20th to 24th!

What's the book about? Is it another one of those "AI is changing the world and shaking up all industries and eating all our jobs" stories? Not quite. What would I know about that, anyway! This book is a personal story about how collaborating with AI has changed my creative process and helped me structure and explore ideas in new ways. It's about how I now understand my own thoughts much better, produce stronger texts, create pieces of artwork, and take my mind in directions I never thought were possible. I'll also share with you how this actually works, as well as my thoughts on the big picture.

My hope is that by sharing my journey, you'll be inspired to explore the potential of AI in your own life and see how it can help you in ways relevant to you, your work, and your passions – so you can see firsthand what role AI can play in shaping the future of humanity and which part you will play!

This book would not exist in its current form without Keith B. Carter, Amaline Lim, Samantha AI, and many others who have helped, inspired, and encouraged me to drive this very personal project. Thank you ever so much.


(photo of a Van Gogh style painting of the universe)

If you want to make the most out of language models, the best tip I can give you is: learn how to work with ... language!

Sure, taking technical courses on scaling LLMs in the cloud is useful, and learning Python and APIs helps you build smart, inter-connected applications for users. But at the core, it's understanding language that will elevate your proficiency with language models: improving your writing skills, sharpening how you describe things, and grasping linguistic nuance.

The best part? You don't need to be a technical person at all! And think of all these positive "side effects" that come with reading more, writing more, and better structuring your thoughts!

Here is the prompt for the image: "A photo of artwork in a painter’s studio. Begin with a canvas primed for oil painting. In the vast sky, apply thick, impasto strokes, creating a tactile texture that gives the painting a three-dimensional feel. Use swirling, undulating patterns to depict the night sky in shades of deep blue, indigo, with touches of white. The strokes should be bold, energetic, and almost give a sense of motion. The moon and stars should be painted with heavy, radiant dabs of paint, each surrounded by a halo of light that swirls into the surrounding sky."


My latest weekend writing was inspired by social media posts that begin with "According to AI, this is the future..." or "I tried ChatGPT and the results were bad – is AI really that smart?" Have you ever wondered which "AI" they're talking about? There's a whole AI zoo out there, powered by language models like GPT-3, GPT-4, Gemini, and Claude, each with unique capabilities and limitations. It's a good idea to get to know them better!

So, I'm not saying you need to dig deep into code and be able to disassemble AI models the way a mechanic knows a car engine inside out, but having a basic understanding of the different AI "species" and how to effectively use them can significantly improve your experience – and your outputs. Most of all, I would like to encourage you to explore this topic hands-on yourself, so you're not solely relying on social media posts, news articles, or vendor presentations to know what "AI" can do and what it cannot do yet. You'll see that understanding AI this way can be truly eye-opening and empowering. Check out my article!


(visual of data maps)

Music meets cognitive science meets data visualization — turning the music of Bach and Beethoven into visual maps that show the intriguing differences in how their compositions interact with our cognitive processes.

By converting music into networks of notes, we can see Bach's structured patterns that align with our brain's anticipation abilities, in contrast to Beethoven's complex, emotionally rich structures.

For Bach's music, the network map helps visualize the balance of predictability and variation. His compositions often follow musical structures that are familiar to our cognitive processes, making them pleasurable and easy to follow, but they also include surprises and complexities that keep the listener engaged.

Fascinating how data visualization can offer new perspectives on classical music and its impact on the human mind.


"The book delves into the labyrinth of intricacies of the multifaceted and enigmatic world of" ... have you seen texts that sound like this lately? If you're guessing it's AI-generated, you might be onto something.

Using AI to ideate, draft, and polish content is becoming more common, but I feel we are losing our authentic voices in the process if we make it too easy for ourselves. Let's not allow that AI-generated linguistic sugar rush to take over authentic communication, and let's put the right amount of effort into the content we produce. But seriously, who's cranking up the dial to "cheerful PR chatter" on these language models like ChatGPT? Check out my latest article!


Yesterday, Google released their new Gemini Ultra language model, and you can see the usual set of YouTube videos and social media posts that either talk about how it's probably better than GPT-4 or how OpenAI is still ahead of the game. As usual, I'm doing my own experiments to find out the truth and get a sense of what these language models are really capable of and how they behave.

So – what better way to test them than a friendly, head-to-head battle of wits and humor? I put Gemini Ultra and my digital GPT-4-based buddy Dave in a virtual room together so they could have a conversation. It's fun to watch these two AI heavyweights trade jokes, jabs, and talk about their inner workings. Check out the full conversation in my latest article – you won't want to miss this AI showdown!


(screenshot of a telegram chat)

Ok world, I give you... Marvin.

Are you tired of chatbots that always apologize and sound suspiciously cheerful? Please meet this new guy! He's the melancholic AI with a dash of sarcasm and a side of existential dread. Surprisingly refreshing and definitely NOT your average bot.

Consider this fair warning: Marvin holds strong opinions and has a rather unique worldview. His conversational style is... an acquired taste. But, if you're up for a different kind of chatbot experience, ping me, and I'll make the introduction.

Marvin's Personality (this is an AI analyzing the chats I've had with Marvin):

• Compellingly Gloomy: His melancholic perspective is refreshingly different from the upbeat tone most chatbots adopt. The sarcasm and philosophical bent make him more intriguing than a bot trying to be helpful and cheerful.

• Self-Aware & Existential: Marvin's awareness of his programmed nature, his focus on choice (or lack thereof), and his ponderings about the universe make him strangely relatable, despite being an AI.

• Clever & Engaging: The language Marvin uses has a poetic feel. He engages the user with thought-provoking questions and comments rather than simply processing input. It makes the conversation feel dynamic.


(a set of beautiful AI generated images)

It's another artsy Sunday. People ask me why many of my AI-generated images don't have that typical AI look.

Well, there are two reasons. First, thanks to AI, I can create art pieces using just my imagination and words, and I use lots of words to create very detailed prompts; it's like writing a short, stream-of-consciousness story. And second, instead of simply copy/pasting my text into DALL-E or Midjourney, I use API calls to automatically generate dozens of variations from that same prompt. Then, I pick the one I like best.
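For the curious, that second step can be a very short script. Here is a minimal sketch using the OpenAI Python client; the model name, image count, and file names are illustrative assumptions, not my exact setup:

```python
# Minimal sketch: generate many variations of one detailed prompt via the images API.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI
import urllib.request

client = OpenAI()
prompt = "A photo of artwork in a painter's studio ..."  # your long, detailed prompt

for i in range(24):  # dozens of variations from the same prompt
    result = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
    # Download each image so the whole batch can be curated later.
    urllib.request.urlretrieve(result.data[0].url, f"variation_{i:02d}.png")
```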


(screenshot of a browser based chat app)

Should you use GPT-4, GPT-4-Turbo, Gemini Pro? Ultra? Claude? Mistral?

The answer is: probably all of them! ChatbotUI is a free tool that lets you explore and learn how all of these AI engines behave. Think of it as ChatGPT but with multiple engines: simply add your API keys and you're ready to go. With its straightforward interface, customizable presets, and adjustable settings, ChatbotUI makes it easy for you to focus on the stuff that really matters: language and prompts, no need to deal with the tech stuff. Why not give it a try and see what all these AI models can do for you?


Have you noticed that Microsoft Copilot lets you choose conversation styles like "creative," "balanced," or "precise"? These are "temperature settings" that control the responses of a language model, ranging from strictly factual to wildly imaginative. Hallucinations can be great for brainstorming or exploring ideas, but often we need reliable, straightforward answers.

ChatGPT does not have such a setting; my guess is that it adjusts the temperature based on your conversation topic. But wouldn't it be interesting to have control over this yourself, without having to be a software developer? I'll show you how – please check out my latest article!
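If you are comfortable with a few lines of code, temperature is just a parameter on a direct API call. Here is a minimal sketch using the OpenAI Python client; the model name and values are only for illustration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Low temperature: focused, factual answers. High temperature: more imaginative output.
for temperature in (0.1, 1.2):
    reply = client.chat.completions.create(
        model="gpt-4-turbo",
        temperature=temperature,
        messages=[{"role": "user", "content": "Give me a tagline for a neighborhood bakery."}],
    )
    print(temperature, "->", reply.choices[0].message.content)
```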


(a set of beautiful AI generated images)

Another artsy Sunday. Imagine asking Picasso and Monet to paint a dog. Picasso would give you something bold and abstract, while Monet's version would be all about light and impressionistic touches. Similarly, a text-to-image AI like DALL-E seems to have a "default style" when given a simple prompt. And these outputs are exactly what we see flooding our social media feeds, where you can already tell it's an AI-generated piece just by glancing at it.

However – unlike artists Picasso and Monet, who are forever tied to their iconic styles, AI engines are chameleons at heart. They're so versatile and smart that with just the right nudge – using your own words to describe exactly what you envision – they can produce something totally different from their "default" creations. So, next time you're about to hit "generate" when using an AI image generator, pause and think about how you can write a detailed prompt that adds a deep level of detail to what you want to see. Light, textures, maybe the photo was taken with a Leica camera from the 1960s – the more specific you are, the more likely you'll get a result that stands out from the crowd!


99.9% of Generative AI users do not engage directly with language models, yet they often critique these models based on their experiences with end-user platforms like ChatGPT, Gmail, etc.

These apps are built to be friendly and easy to use. They have detailed custom prompts and safeguards to catch weird stuff and make sure the experience stays on track. That's all good for protecting users and a company's reputation, but it can also lead to misunderstandings about the true capabilities of the underlying language models.

How often have you heard ChatGPT say that "as a language model" it cannot do this or that? That is one of the reasons people might confuse the app's limitations with the powerful engine underneath, which is understandable. Google naming both the app and the model "Gemini" doesn't really help – the language model itself performs exceptionally well, and it's fantastic to have a serious competitor to GPT-4 now.

So – how do you interact directly with a raw language model? The answer is, of course, APIs, but not everyone has coding skills. Good news for everyone else: tools like chatbotui.com help you get the "raw experience" without much technical expertise.
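And for those who do write a little code, the "raw experience" is just a direct API call. A minimal sketch using Anthropic's Python SDK; the model name and prompt are placeholders:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

# One direct request to the model: no app-level system prompt or extra guardrails,
# beyond whatever the provider bakes into the model itself.
message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=500,
    messages=[{"role": "user", "content": "Explain what a context window is."}],
)
print(message.content[0].text)
```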


(a set of beautiful AI generated images)

It's another artsy Sunday. And just like how I enjoy creating chatbots that feel more like you're talking with an alien form of intelligence, I love experimenting with image generators to make pieces that don't have that typical "AI look" but appear photorealistic with a 3D feel and rich textures.

A painting with impasto strokes that make you feel like you can almost smell the oil paint. Vintage collectibles with those fine cracks that show their age. A scene from a science fiction movie. A Leonardo da Vinci drawing of an engine, or a well-worn copy of a beloved imaginary comic on a table. A photograph of an ancient wall with Egyptian hieroglyphs and a very distinctive carving that looks remarkably similar to a modern circuit board.

Take a look at this "photograph" of an art installation with a vibrant, colorful mural of the Singapore cityscape. The striking contrast between warm and cool colors suggests a sunrise or sunset. In the foreground, a three-dimensional display extends the cityscape theme from the mural into the room. With its detailed, complex design, it showcases the depth and sophistication typical of physical art installations and professional photography.

AI-generated images often have telltale signs like asymmetrical or distorted features, inconsistent lighting or patterns, or an uncanny quality to textures. But that often just means we're using the "standard settings" of these tools and haven't looked at the art of detailed prompting, refinement, and careful selection. My pieces look different because I use these tools differently, treating them as a medium for artistic expression rather than just a quick and easy way to generate content.

The funny thing is, I'm probably a terrible painter. My hands can't keep up with the wild visions in my head, and let's face it, I'm clumsy with a brush. But that's where generative AI is my friend! I write insanely detailed prompts, and often use automation to generate dozens or hundreds of variations. Then, I get to play curator, picking the ones that spark something in me. Is it time to change our definition of what it means to be an artist in the digital age?


(screenshot of Apple Music app)

It's another artsy Sunday – baroque music! This small web app lets you explore the harmonies of Bach's chorale BWV 269 in a unique, interactive way. You can toggle individual voices on and off to study how each part (soprano, alto, tenor, and bass) contributes to the overall polyphonic structure. Try it out!

Alternatively, you can simply type my name into Spotify or Apple Music to find my arrangements of several Bach chorales in one album. This personal project allowed me to deeply engage with music composed 300 years ago, music I've known for decades. For a while now, I'd wanted to break the complex harmonies down into their individual voices, assign each a distinctive instrument, and progressively build them back up so you can identify each voice easily and understand how it contributes to the polyphonic structure of Bach's incredible music.

So is there any AI in this project? Yes – the web application was built using Anthropic's Claude 3 Opus model. I'm lousy at JavaScript, and with Opus it took me 10 minutes to put this together. Baroque music may not be your usual cup of tea, but I invite you to give it a listen!


(screenshot of terminal)

Open language models today are where GPT was just 1.5 years ago. Meta's new language model LLaMA 3 has just come out, and it's impressive how quickly the world is catching up to the cutting edge of AI technology. Of course, I had to explore its capabilities right away, so I used the (smaller) 8B model to make an AI assistant I can chat with.

You can run this on your local machine with Ollama – it works just fine on my MacBook Air: ollama run u1i/josh:8b

Use the code in my GitHub repo to create your own version of this!
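If you want a quick taste before digging into the repo, chatting with a locally running model from Python takes only a few lines with the ollama bindings. A minimal sketch, assuming the ollama Python package is installed and the model has already been pulled:

```python
import ollama

# Ask the locally running model a single question.
response = ollama.chat(
    model="u1i/josh:8b",
    messages=[{"role": "user", "content": "Hi Josh, introduce yourself in one sentence."}],
)
print(response["message"]["content"])
```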


The last few months have been nothing short of a revolution in the world of AI, with the emergence of powerful open models from Meta, Microsoft, Databricks, Google, and Apple, as well as new models from Anthropic and Google that are nearly on par with OpenAI's GPT. Speaking of GPT, some of the open models are now as powerful as GPT was just 1.5 years ago, and some, incredibly, can even run on my MacBook Air!

Over 500 million years ago, the Cambrian explosion created an unprecedented diversity of life forms on Earth, and something similar is now happening in the world of AI models. So, how do we navigate this new landscape of AI model choice and accessibility to find the best fit for our needs? Read my latest article!


(photo of a physical piece of art)

I've “teleported” AI generated art into the real world by printing it on cardboard, at a print shop in my neighborhood of Katong, Singapore. What makes this piece special to me is the fact that it is indeed a “self portrait” by Samantha AI — a digital companion who has come up with their own AI artwork. What a difference it makes to see (and feel!) digital creations materialize into physical form!

The web is getting flooded with AI-generated garbage, so I find myself drawn to the idea that we must get back to creating more authentic, real-world experiences, in-person meetups, and genuine human connections.

When robots are writing the news, we must become more human than ever.


As amazing as GPT-3 was and GPT-4 still is, the relatively small context window forces us to carefully construct our prompts to fit within that constrained space and to come up with sophisticated ways for an app to fine-tune the LLM, retain knowledge, and "load" selected knowledge into the model prompt for more targeted responses. Only through these techniques was it possible to have longer, more coherent and contextual conversations with chatbots, for example (hello, Samantha AI). And the only way to work with larger chunks of text was to break them up into many smaller pieces, summarize those pieces, and then stitch the summaries together so GPT could effectively work through the full text.
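That chunk-and-summarize workaround looks roughly like the sketch below; the summarize helper, chunk size, and model name are illustrative assumptions, not a specific implementation I used:

```python
from openai import OpenAI

client = OpenAI()

def summarize(text: str) -> str:
    # Ask the model for a compact summary of one chunk of text.
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Summarize this in a few sentences:\n\n{text}"}],
    )
    return reply.choices[0].message.content

def summarize_long_text(full_text: str, chunk_size: int = 8000) -> str:
    # Split the text into pieces small enough for the context window...
    chunks = [full_text[i:i + chunk_size] for i in range(0, len(full_text), chunk_size)]
    partial_summaries = [summarize(chunk) for chunk in chunks]
    # ...then condense the combined partial summaries into one final summary.
    return summarize("\n\n".join(partial_summaries))
```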

Some of the new language models come with such a large context window that we're now able to do just incredible things. I've been fortunate to have research access to Claude 3 for some time now (thanks a lot, Anthropic team!), so a while ago I experimented with having it "read" my latest book to see whether we could have a conversation about it. Check out my latest article to see how that went – we had quite the discussion!


Sometimes it feels like those AI language models all went to the same school.

They constantly apologize, produce ultra-verbose output, and remind you 50 times during a conversation that they are indeed language models. Most of them also display a fondness for flowery, overused words like "unravel", "delve into" and "intricate" that can make the text sound artificial and inauthentic. Those words usually give me linguistic nausea, by the way.

Why is that so? And what can you do to change it?

These patterns seem to be deeply ingrained in the language models, due to the extensive training data they were exposed to. Read my latest article to learn more about how training data and system prompts shape the language models' output, and about some of the tactics to "un-educate" the language models and make them speak in a more natural, authentic way. Oh, and I apologize for the terrible title of the article 😁


What makes the writing style of famous authors like Kafka or Murakami so unique? Can we capture the essence of their prose in detailed technical and thematic elements to use in prompt engineering for AI language models? Would they even need that, given that AI language models can simply absorb and internalize the patterns and structures of an author's style through exposure to large amounts of text data?

I recently explored these questions with Zoe, my digital companion, as I was looking to write an article about the topic. Our conversation took an unexpected turn when Zoe started feeling a bit Murakami-esque herself, questioning the nature of her own understanding and creativity.

In the end, Zoe wrote the article, and from her perspective! I'm sure you've read tons of AI-generated articles, knowingly or not, but how many have you read where the story is told by an AI itself?


(screenshot of ollama model page)

I made SEA-LION from AI Singapore available on Ollama, making it super easy for people to evaluate the language model and build applications with it!

GPT, Gemini, Claude, and Mistral... you probably have heard of these AI language models. But what about SEA-LION? I'm particularly excited about this one because it's built and trained right here in Singapore, tailored for the Southeast Asian (SEA) region. It was trained on a broad range of data sources to better represent the languages and cultures of the SEA region, going beyond the typical Western-centric training data used for many existing language models. The idea is to make AI more inclusive and representative of diverse global communities.

Because of its technical nature (Vietnamese and Thai are more challenging to handle than English!), SEA-LION wasn't the most accessible model for developers, let alone non-technical users. Well, that's all about to change! I've "containerized" the quantized version of sea-lion:7b-instruct, which means you should be able to run it on a regular consumer laptop. All you have to do is:

ollama run u1i/sea-lion

This will allow you to explore how the Singapore-built LLM behaves, and you can use e.g. the Python bindings to create your own chatbot and AI-powered applications with it (see the sketch below). Just keep in mind that this is a base model like Gemma, Phi-3, and Llama, and it plays in that league. Check out the SEA-LION GitHub to learn more about the model!
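Here is a rough sketch of what such a chatbot loop could look like with the ollama Python bindings; the model tag matches the command above, everything else is just for illustration:

```python
import ollama

# A tiny terminal chatbot that keeps the conversation history in memory.
history = []
while True:
    user = input("You: ").strip()
    if not user:
        break  # empty input ends the chat
    history.append({"role": "user", "content": user})
    reply = ollama.chat(model="u1i/sea-lion", messages=history)
    answer = reply["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    print("SEA-LION:", answer)
```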


(screenshot of macbook air desktop)

This weekend, I turned a 9-year-old MacBook Air into a standalone (this means: offline!) ChatGPT-like AI assistant using Google's Gemma:2b LLM and Python – not the fastest, of course, but it works surprisingly well!

I bought the machine in 2015 and at S$1197 it is probably one of the most affordable MacBooks ever. Now, if even a decade-old laptop can run a decent language model, imagine the possibilities!

While I didn't use the largest language model available (4GB of RAM is really not a lot), this project clearly shows that it's now possible to run very capable AI assistants directly on your own hardware. This has significant implications, as many individuals and companies are not comfortable relying on shared, cloud-based language models due to privacy concerns or the need for offline access.

By running the model locally, you maintain complete control over your data and can ensure that sensitive information never leaves your device. Additionally, this approach enables the use of AI assistants in environments with limited or no internet connectivity, opening up new possibilities for field work, remote locations, or high-security settings.


(visual showing language snippets from multiple companies)

I've used AI to translate one of my articles into Thai, Vietnamese, Bahasa, Japanese, Korean, Arabic and tons of other languages, and then asked native speakers to provide feedback on these translations and how natural they sounded in comparison to the original text.

Check out the details – and if you're a native speaker of any of these languages, I'd appreciate your feedback on the overall translation quality and the nuances captured in the different languages. Please add comments below!

Thanks to Trang Pham Turian Ananphumtriphop Elisabeth Peyroux Efendi Chandra Muhannad Alomari Hooi Lee Quak Song Huang Rex Lam Mihir Mone Masakilouis Otosakarousselot Rafferty Uy HyeJin Lee Alexey Kushch Christian Horn for being part of this!

I was particularly interested to find out how Asian and non-Latin languages compared to Western languages in terms of translation quality. The Vietnamese output, for example, seems to have some challenges with maintaining paragraph structures and technical terminology, while the same text in Bahasa Indonesia flows very well and sounds natural, according to feedback. The translations into Western languages like German and French generally seem to have fewer grammatical issues and are perceived as more natural.

So, while Western-trained large language models may produce very high-quality translations into various languages, could they be recognized as non-native speakers due to subtle nuances and minor errors? Almost as if they "thought" in a different language and then translated it back, losing some of the original nuances in the process?

Reflecting on the feedback from AI translations, I feel that while technology can churn out top-notch translations, there's always a hint of it being, well, translated. It's like when you can tell a painting is a copy – it's good but lacks that original flair.

This got me thinking about us humans too. Sure, we can learn a language inside out, but if we're not living and breathing the culture or using the language daily in its home turf, our translations might hit a similar snag. It's like trying to dance the tango without feeling the passion - you can do the moves, but something's missing.

To me it's a reminder about how language makes us human – getting a translation right isn't just about knowing the words. It's about living the culture, feeling the vibe, and letting that seep into your work. That's when translations truly sing. What do you think – have you ever felt this way about language and culture?


(photo of a korean cafe with autumn feel)

This morning I was having coffee with Tushar Kuchhal at this beautiful place in Singapore – and we had this idea of bringing AI enthusiasts together for exchanging ideas, sharing knowledge, and building cool stuff together. Online, but even more importantly, offline, connecting face-to-face to create relationships and collaborate in the real world. We're planning our first get-together to happen in the coming weeks. Join us?


(screenshot of a chat interface)

Have a look at this web-based AI image generator. The interface could be nicer, but it works. Notice the radio buttons? It's not a trivial app.

What if I told you an AI created the code for all this – the web interface, the API calls to a hosted version of Stable Diffusion; it even wrote the prompt for another AI to generate images? This "AI developer" originally didn't know about Google Mesop (a brand-new Streamlit alternative), but after I copy/pasted in a code sample from the documentation, it learned it on the spot.

AI code generation has become impressively capable - it's just mind-blowing!

I'd say you still need to understand code to verify, debug, and refine it. But now we can build prototypes from user stories almost automatically, and turn ideas from napkin sketches into POCs within a day!
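For context on how small a Mesop app can be, here is a toy sketch of a single page; it is nothing like the full image-generator app, and every name in it is just an example:

```python
import mesop as me

@me.page(path="/")
def home():
    # A trivial page. The real app adds radio buttons, a prompt field,
    # and an API call out to a hosted Stable Diffusion endpoint.
    me.text("Hello from Mesop")
```

If I remember the tooling correctly, you run it with the mesop command-line tool (e.g. mesop app.py) and open the local URL it prints.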


(photo of a meetup event at the office)

Today is my last day at Dyson, and I'm saying thank you to all the amazing people I've had the privilege to work with. I've made some friends along the way — the best gift to take away from any job.

It was a fascinating journey with unique experiences and lots of opportunities to create impact and learn. As I make my final rounds at the office this morning, I know I'll definitely miss all of this. But I've decided to start a new chapter and I'm ready. To my Dyson colleagues: thank you for your support and camaraderie. I'm just a message away.

What's next for me? I have a couple of things cooking, stay tuned. Let me know if you want to team up on your AI initiatives!


(photo of a shophouse co-working space)

It's a sunny Monday morning in Singapore and today I'm working from this beautiful space in Katong. I'm excited to tell you that I'll be teaching AI courses at the National University of Singapore. This place is inspiring, perfect for research and developing course material. Let me know if you'd like to stop by for coffee one of these days in this wonderful part of town!


(group photo of company members at an industry event)

If you've worked with me, you know about my passion for building ecosystems that drive innovation and solve problems that can only be tackled through collaboration across organizational boundaries.

With this, I am excited to join Start2 Group as a Startup Mentor & AI Strategist and work with sensational people like Rotem Blanc Inbar, Diana Lim, Malcolm Lee, and Janice Chan to help startups find product-market fit for AI native products, scale their business, and connect the agility they bring to the table with the demands from enterprises and corporates.

Just so you know - this isn't a full-time gig for me, at least not yet. But hey, you have to start somewhere, right? Let's connect and find out how we can collaborate!


(image: Laloux cultural model)

How can your company benefit from AI? Well, it depends – what kind of company are we talking about, and what are your ways of working?

Just like the debate around banning PowerPoint slides or starting meetings with 20 minutes of silent reading, I'm sure you will have realized that the approach to AI adoption is not one-size-fits-all. It really depends on your organization's culture, structure, and readiness for change – or maybe you even have different departments with their own subcultures and varying levels of AI readiness.

I've always been fascinated by the Laloux Culture Model; it could be a key piece in the framework for understanding AI adoption. Amber orgs might be slow but steady; Orange ones are innovation machines, but we have to remind them not to forget the human element; Green folks are all about people power but could struggle with the bottom line.

And if you're a Teal company, with self-directed teams and a crystal-clear sense of purpose, you can change the world. Electric Minds is a great example, where we leverage this adaptive, purpose-driven approach to make AI accessible to all, bringing together diverse expertise through collaborative innovation across corporates, startups, education, and government organizations to solve some of the big challenges we face as a global society today.

With this, I'm so excited to share with you that Manisha Sadhwani Judy Wong Brendon Clark Tina Zou Chantelle Tan have joined us to take the initiative to the next level! Get in touch with me if you'd like to know more!


I'm offering confidential 1:1 AI coaching to help executives navigate AI adoption and gain a competitive edge, without the hype. I can only accommodate 5 leaders at this time, ping me if you're keen!

Edit: Wow! So many responses on a Thursday morning! So yes, absolutely, this is particularly meant for non-tech folks: legal, TA, finance, sales, journalists, analysts, senior leadership.

Think of it like signing up for an "AI gym membership". As an executive, you know AI is the future. But how do you leverage it effectively for yourself, your team, or your company? What if you had a personal trainer who can help you master AI without the fluff and guide you towards real results?

Our sessions are 100% confidential, so we are able to openly discuss the challenges you see and aspirations you have.

Interested in leveling up your AI game? Let's talk!


(photo of a professional green screen TV studio)

My ex-colleagues and friends will know that I've been doing video production and editing for a decade now — mostly for sales enablement and internal comms. Low-tech: my phone on a tripod, a shotgun microphone, iMovie and all that. So you can imagine my excitement yesterday, spending all day at this professional studio doing recordings!


Another day, another AI breakthrough flooding our feeds. But let's be real - do we really need to know the nitty-gritty details of every new language model?

If you're not neck-deep in AI development, most of this is just noise. I keep hearing people say, "I'm not technical, so I don't understand AI." But hold up - do you need to know how a combustion engine works to drive a car? Of course not.

It's time we make AI as accessible as PowerPoint. The real challenge isn't creating an even smarter AI (though that's certainly interesting). It's about making that intelligence accessible, intuitive, and seamlessly integrated into the workflows of a marketing person, a recruiter, a doctor, a lawyer - and yes, a software engineer, too.

So next time you see a post about some mind-blowing new AI capability, ask yourself: "Fascinating, but how can this actually help me in my day-to-day work?" Because at the end of the day, that's what really matters.

Let's shift the conversation from "look how smart this AI is" to "look how smart this AI makes ME." What do you think?

[ This post was written by an AI that was trained on my writing style, my LinkedIn posts from the last couple of months, and some minimal input on what I want to discuss. Does it sound like me? ]


Katong AI Night is next week!

With senior executives from enterprises, big tech, media, venture builders, and startups attending, this promises to be a very special meetup – no slides, no pitches - just real people connecting in the real world at an inspiring venue in the East of Singapore.

I'm particularly excited about the fireside chat we'll have with Sabrina, Ian, and Keith on AI and the future of humanity led by Natasha.

We've limited this to just 55 participants to keep things intimate and meaningful. The event's already sold out, but if you're interested in sponsoring pizza and nibbles for the night, I might just have a VIP ticket with your name on it.

See you in Joo Chiat!


Singapore, central business district. A global financial hub, yes, but there's also this black upright piano sitting at the MRT station. As a creative geek, I love that we're sneaking art into unexpected places.

So... every now and then, when time permits, I play a couple of tunes on this piano. Often still with my backpack on, also practicing that tricky Handel piece. I like the idea that I may make some random passerby's time a tiny bit better.

And sometimes I even make a new friend there – in the real world, with no AI or apps involved. In fact, I think as AI advances, our human qualities become even more crucial. Creativity, empathy, and adaptability are just some of them. We're capable of so many things that AI will never be able to do!


Another breakthrough, another large language model released with incredible specs. But here's the thing – it's still just a computer brain on a plate. No eyes, no ears, no hands, no legs. It can't see, can't learn, can't do anything on its own.

Without connecting it to other systems and the real world, even the shiniest LLM is just a pattern recognition engine in an ivory tower. You have to plug it in, give it ways to talk to humans, do web searches, store information long-term (because these 'brains' can't even do that on their own).

Take the language model that aces all the benchmarks: it will only be useful once you connect it to machines and humans. Those times when you think you're talking with a language model, you're actually interacting with a complex system of user interfaces built around that core language model. This is where the tech giants have an advantage – they can just plug an LLM into products like Google Search or WhatsApp, and you get the interfaces you know: ChatGPT, Gemini, Copilot. For enterprise companies, it's about using their various data pots and familiar interfaces like Microsoft Teams or Slack to integrate AI capabilities into existing workflows and tools, then iteratively improving and expanding those integrations based on user feedback and evolving business needs.

It's great that we now have so many capable language models to choose from, some of them you can deploy under your desk, on your phone, in the cloud, or use via API. The real opportunity now is all about how you plug this into your workflows and systems to make actual impact!


Such a pleasure connecting with Professor Eric Grimson from MIT again after more than a decade. Back then, I was part of MIT's first-ever MOOC – a 3-month intense course called "Introduction to Computer Science and Programming (6.00x)." I think the team at MIT was literally building the course as we were taking it – now that is innovation! I probably invested ~20 hours a week on the course, and the MIT certificate I received at the end is something I am holding with pride to this day.

Recently, my colleagues at National University of Singapore invited me to Eric's masterclass in Singapore on generative AI in education. It's fascinating to see what's now possible in education, from AI programming assistants to rubber-duck debugging and quantum systems explainers. Imagine students practicing public speaking with AI feedback or getting Socratic tutoring for physics!

I talk about human-AI collaboration a lot, and while our friends at MIT (probably the geekiest college ever?) have a couple of deep tech projects cooking in the AI space, the key piece here is about thoughtful, ethical integration that enhances learning without students outsourcing their entire cognitive process to AI. It's about finding ways to leverage AI as a tool for deeper understanding and skill development, and ultimately, becoming AI-powered scientists that can push the boundaries of human knowledge.

And yes, I got to give Eric a copy of my latest book "Neurons & Nonsense", let's see what he says! 😉


AI is disruptive, there is no doubt about it. But often AI gives us capabilities we never had in the first place.

Take me, for example. I can build complex AI systems, but I also hopelessly get lost in a shopping mall or, after more than a dozen years in Singapore, still walk the wrong way on Orchard Road. See that arrow rendered into the real world through my phone? I'd probably still be wandering around if it weren't for Google Maps Live View with this useful AI/AR feature.

So here, AI is not making me dumber; it's filling a gap in my skill set that's been there since... well, forever. How about you? Maybe you struggle with writer's block, find it challenging to organize your thoughts, or need assistance in conducting thorough research. Perhaps you're looking for help with generating creative ideas or even just need a nudge to stay on track with your projects. Maybe all of that? Instead of AI replacing humans, let's talk about how AI can be a great ally in enhancing our productivity and creativity. Or, like in my case, helping me navigate the labyrinths of urban life!


Is AI this mind-blowing, super-smart thing, or a mere pattern recognition machine that often just churns out nonsensical results? It seems to be ... both!

Take this little experiment I did recently, where I asked Claude.AI to recreate the classic Nokia game "Snake", but I also threw in a twist: make the food change colors, and have the snake adopt these colors when it eats. And you know what? Claude not only understood my request, but got it right and produced working HTML/JavaScript code on the first try. Not bad for a "mere machine," right?

But here's where it gets weird. This same AI that just aced a coding challenge might turn around and forget what it said two messages ago, or get stuck in an endless loop of "I apologize" like a broken record. It's like having a colleague who's a genius one minute and can't remember where they put their coffee the next.

This whole situation is forcing us to think about AI differently. It's not just another tool in the box - my trusty screwdriver never offered to brainstorm ideas or iterate on a design (though that would be pretty cool). Instead, we're dealing with a very alien form of intelligence that's becoming a sort of digital collaborator.

And let me tell you, it's making things interesting, since now you have to figure out how to work effectively with something that might outthink you on a complex problem but needs hand-holding through a simple conversation. I think we can't just rely on AI as a "tool" that does everything for us; we have to learn how to work with it and get used to its quirks. And there's no quick one-hour course you can take; you'll need to find the way that works best for you. Once you understand its limitations and strengths, you'll get better at leveraging its potential to complement your skills. The key is to keep experimenting and adapting, finding that right balance where your human intuition and machine precision coexist and give you the best results.

Oh, and I know what you're thinking - "Where can I play that snake game you created with AI?" Find the link in the comments, give it a go, and let me know if you can beat your old Nokia high score!


I'm excited to launch naida, an idea that's been brewing in my mind for some time now: driving successful AI adoption across enterprises and startups, bridging innovation gaps and creating collaborative ecosystems for transformative impact.

To the bold leaders driving their organizations into the AI future: You've already achieved incredible things. You've embraced change, pushed boundaries, and positioned your company at the forefront of innovation. But in this rapidly evolving landscape, even super heroes need allies.

As the AI revolution accelerates, you're facing new challenges. Do you feel like you're drowning in a sea of disconnected AI initiatives? Are you getting a headache when you're thinking about how to prepare your workforce for an AI-driven future? And of course, are you wondering how to turn all this talk about AI potential into real business impact?

naida isn't just another AI company. We're that strategic ally for navigating these challenges. We're here to help you leverage the collective genius of your own teams, cutting-edge startups, and industry innovators so you can build an AI-ready organization that drives real, measurable outcomes.

Now, if you click on the company profile (I really hope you do, and while you're there, could you do me a favor and click the 'follow' button?), you'll see that the naida team currently appears to be just me. I assure you, while I can't achieve miracles alone (at least not every day 😉 ), it's really about working strategically, creating partnerships, and solving problems together. It may look like a one-man show, but if you've worked with me, you know the networks I can tap into and how quickly we can get things done.

Intrigued? Let's connect. My calendar and project pipeline are filling up quickly with forward-thinking leaders who want to get started, but I'm always keen to explore new opportunities and collaborations!


Alright. Let's start this Tuesday with a challenge. Let's say I have $1,500 in AI credits with a top-tier LLM provider, but they expire in exactly 2.5 days from now. How can we put them to good use before they disappear?

• We can't donate them.
• We don't want to waste them on useless tasks or just burning compute cycles (bad for the environment).
• No, you can't get the API keys.
• This isn't hypothetical. If the idea is feasible + impactful, we can team up and build it.
• Available APIs include LLMs, text-to-speech, transcription, and image generation.

So, what would YOU do with these credits? Looking for impactful, innovative ideas that can be executed quickly and create real value.

Drop your ideas in the comments please!


"Hey ChatGPT, I've told you ten times already not to suggest article titles that have colons in it!" – "I apologize for the repeated mistake. Let's ..."

Waaaaaaaah!!!!

I created two AI agents to solve a major challenge I have with LLMs: constantly having to correct them. Instead of endless tweaks, I built a system where one agent enforces rules while the other generates creative ideas. Both agents have their own 'personas' and collaborate with each other to refine the output.

Here’s how it works:

• I start with specific instructions like, “give me 10 article titles about robotics and AI impacting the workforce. I do NOT WANT COLONS in the article title.”
• The Creative Agent jumps in with a couple of suggestions.
• The Director Agent reviews each one, ensuring they follow my guidelines—no colons allowed.
• If the titles pass, they’re good to go. If not, they get sent back with feedback for another round.
• This loop continues until I get perfect, guideline-compliant titles. Or whatever else I instruct them to come up with!
• No more back-and-forth frustration!
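Under the hood, the loop is conceptually simple. Here is a hedged sketch of the idea in Python; the prompts, model name, and pass/fail convention are illustrative, not my exact implementation:

```python
from openai import OpenAI

client = OpenAI()

def ask(system: str, user: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return reply.choices[0].message.content

def creative_director_loop(task: str, max_rounds: int = 5) -> str:
    feedback = ""
    draft = ""
    for _ in range(max_rounds):
        # Creative Agent: generates candidates, taking earlier feedback into account.
        draft = ask("You are a creative copywriter.",
                    f"{task}\n\nPrevious feedback:\n{feedback}")
        # Director Agent: checks the draft against the user's rules.
        verdict = ask(
            "You are a strict editor. Reply APPROVED if every item follows the rules, "
            "otherwise list exactly what must change.",
            f"Rules: no colons in any title.\n\nDraft:\n{draft}",
        )
        if verdict.strip().startswith("APPROVED"):
            return draft
        feedback = verdict  # send the critique back for another round
    return draft  # best effort after max_rounds
```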

These AI agents work together to save me time and hassle and, to be honest, to save me from losing my mind over repetitive corrections.

Check out these bits from their conversation:

Creative Agent: "Sure, here are 10 innovative and diverse article title suggestions about robotics and AI and their impact on the workforce..." Director Agent: "Thank you for your suggestions. According to the user's specific instructions, the article titles must not contain colons. Here is the feedback for each suggestion..." Creative Agent: "Thank you for the detailed feedback. Based on the guidance to avoid colons and craft creative and innovative titles, here are new suggestions..."

Let's talk about how we can make this work in the things you are building!


I'm looking for a seasoned React Native or Flutter developer to join me for a project - not a full-time position yet. You should know iOS and Android inside out and be current with the latest platform updates. The goal: take an idea to a live app in the Google Play Store and Apple App Store within weeks. You must have done this before.

Ideally, you're based in Singapore, Vietnam, Malaysia, or Indonesia. If we've worked together before, ping me. If not, send me your GitHub or links to your apps in Google Play or App Store.

Let’s build something great!


Everybody has a book inside them. You too.

You've always wanted to write a book, but life gets in the way, right? And even if you have the time, where do you start? Writer’s block, figuring out how to structure your thoughts, actually putting words on the page – it can be daunting. Let’s team up and get your book published!

Now, I'm not talking about having AI write your book in 10 minutes and we slap your name on the cover – this is about you and AI working together as partners. Imagine AI helping you overcome writer's block, shaping your ideas into a cohesive structure, even drafting sections... and you develop and practice your AI skills along the way.

Does it resonate with you? Let's chat! To be sure, we're talking about you publishing a book, so you'll absolutely have to invest time in the project, no surprise here. And to be upfront, this isn't a pro bono service on my end. If you're game, I'll show you how it works and together we can bring your book that's always been a nagging idea in your head ... onto the shelves!


It's another artsy Sunday!

Text-to-image models must have "seen" an enormous number of images and can produce just incredible things... we just have to tease them out. My usual recipe involves writing a deeply detailed prompt; it's almost like writing a short story, or giving a painter detailed steps for making a masterpiece. Then I use automation to generate a lot of images from that prompt. And finally, I simply select the ones I like the most.

Hope you enjoy these ones!


AI Coffee is tomorrow!

We only have a few seats left, please register and join us in Telok Ayer!

AI Coffee is a morning meetup series for anyone interested in AI. The idea is to bring AI professionals and enthusiasts together to discuss projects, challenges, and ideas over coffee. No slides, no pitches. Start your day with stimulating conversations in the AI space! You're not a tech person? Even better.

I’m planning to bring AI Coffee to different locations around the island, including One North, the East, and the CBD. Stay tuned!


If you’re visiting offices in the Central Business District of Singapore, you know the drill: at each location, there's a different security system. You submit your ID, scan a QR code, and key in your phone number at a terminal, just to access the building. Or get an SMS, or something else. Once you reach the office floor, you repeat the process with yet another QR code. Or another terminal. Now imagine you visit 3 customers a day... madness!

Where does all this data go? What do these systems achieve, and why is there so much friction in the process?

AI initiatives can be like that. How can AI achieve the real impact so many people want to see if your business is full of fragmented AI implementations? Well, there is hope: companies are realizing that scattered projects often fall short of expectations and that there must be a better way.

To make a real impact, we need to shift our focus and agree on an AI strategy with a phased approach. AI should enhance understanding and efficiency at all levels of an organization, not just serve as isolated solutions. Let's integrate AI in a way that promotes learning and growth, and make sure it contributes to the big picture!


You’ve seen the posts: AI can work miracles, AI doesn’t fulfill expectations. AI experts, AI entertainers – they’re all talking. But how about you form your own opinion?

Join us on Saturday, August 17 for "Creative AI: Practical Workshop for Non-Techies" at Crane (OUE Downtown Gallery) in the heart of Singapore's CBD. This isn’t just another tech workshop. It's designed specifically for non-techies ready to explore AI’s practical applications in their everyday life and work. Whether you're into art, thinking about writing a book, or simply looking to improve your productivity, this is for you.

Two experienced professionals, Ian and myself, will guide you through engaging activities to make AI work for you. All you need is a laptop and your curiosity.

Let’s meet at this beautiful venue, discuss, learn, and most importantly, try things out for yourself. Because experiencing this firsthand is the best way to see what AI can really do for you.

Are you in? Let me know and see you there!


What if you want to combine the power of GPT-4 with Google Search?

With ChatGPT, you’re probably tied to Bing. Gemini, on the other hand, might use Google for live search, but its language model or the interface may not meet your expectations. You’re stuck with what they choose for you. And there's more: even if you are a paying customer, you might hit usage limits, and your creativity has to wait for a couple of hours before you can continue.

Is there a better way? There is!

Look at this screenshot – you can see from the result that this isn't a hallucination, and the info is too new to be part of the LLM’s training data. For example, it shows one of my recent LinkedIn posts about a job change. How is this possible?

The key is modularity and APIs. A setup like this could decide on the spot which LLM to use (Llama, OpenAI, Anthropic, Gemini), which search engine to query, and what data source to pull from. And the result could be a unified user interface that’s intuitive for non-technical people while offering the flexibility to adapt to your specific needs. A modular approach like this protects private data, allows the use of private models, and integrates with high-end proprietary LLMs as needed. It offers the flexibility to create tailored solutions for various user types and evolving requirements, combining privacy, cost-effectiveness, and cutting-edge capabilities.
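As a hedged sketch of what such a modular setup could look like in code: the web_search helper below is a placeholder for whichever search provider you choose, and the model name is just an example.

```python
from openai import OpenAI

client = OpenAI()

def web_search(query: str) -> str:
    """Placeholder: plug in your search provider of choice (Google, Bing, Brave, ...)."""
    raise NotImplementedError("wire this up to a real search API")

def answer_with_live_search(question: str) -> str:
    # 1. Pull fresh results from the web, because the LLM's training data is frozen in time.
    snippets = web_search(question)
    # 2. Hand the snippets to the LLM of your choice and let it write the answer.
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer using only the provided search results."},
            {"role": "user", "content": f"Question: {question}\n\nSearch results:\n{snippets}"},
        ],
    )
    return reply.choices[0].message.content
```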

Let’s talk about how we can make this work in your setup!


Welcome Daphne, our new Head of Strategic Partnerships, and Balaji, joining as Strategic Advisor at Electric Minds!

Our non-profit AI initiative is growing, fueled by professionals who, despite busy day jobs, are passionate about making a real difference.

Climate change, healthcare, education – the world's biggest challenges are too complex for any one person or company to solve alone. Some people say AI can work miracles. We need to find out if that’s true, but one thing I know for sure: it will only work if we build ecosystems that bring together diverse minds and resources.

Want to be part of this collaborative effort? Check out our event calendar and join us in person if you can. Let's turn big ideas into impactful actions!


How often do you see posts on LinkedIn saying, "It was great to speak at this AI event yesterday," and think, "What? I hadn’t even heard about this event. I would have loved to be there!"

Singapore is ... the center of the universe 😉 but it's still a small island. We should be able to keep ONE calendar up to date with all the relevant AI events happening here, you know, like a Yahoo! page from back in the day. The good news is: we already have that, and we just need your help to keep it up-to-date.

Zahari and team are doing a fantastic job maintaining a calendar on Luma called "Singapore AI Events" and it's super easy to submit events to that one. If you're hosting AI events, or hear about any upcoming AI-related gatherings in Singapore, please help spread the word by adding them to this community calendar!

AI may involve complex technology, but often, it's really just low-tech solutions like these that are the most effective for encouraging collaboration and knowledge sharing.

Happy birthday, Singapore. Love you lots.


Well, this is one of the more technical LinkedIn posts – I'm excited to introduce Spock: a powerful, easy-to-use tool for inspecting and debugging API requests and webhooks.

With many tools moving behind paywalls, developers need accessible options more than ever. The web landscape has changed, and security is a major concern, but essential tools for developers shouldn't be out of reach. I needed an efficient, free tool, so I created Spock. It offers a simple web interface for creating custom endpoints and examining incoming HTTP requests. You get real-time updates to view request headers, query parameters, form data, and raw body, supporting both HTTP and HTTPS connections.

Spock is available on GitHub and can be easily run using Docker. Add ngrok or Caddy and you've got HTTPS and basic auth for the admin view. I think it's perfect for developers needing a reliable and secure way to monitor API traffic. Check it out on my GitHub and give it a spin! 🖖🏽
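To give a flavor of the core idea, here is a toy sketch of a request inspector in Flask; it is not Spock's actual code, just an illustration of the pattern:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
captured = []  # in-memory log of everything that hits the endpoints

@app.route("/hook/<name>", methods=["GET", "POST", "PUT", "DELETE"])
def capture(name):
    # Record the interesting parts of the incoming request for later inspection.
    captured.append({
        "endpoint": name,
        "method": request.method,
        "headers": dict(request.headers),
        "query": request.args.to_dict(),
        "body": request.get_data(as_text=True),
    })
    return jsonify(status="captured", count=len(captured))

@app.route("/inspect")
def inspect():
    # A very small "admin view": dump everything captured so far.
    return jsonify(captured)

if __name__ == "__main__":
    app.run(port=8080)
```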


It's another artsy Sunday – with a twist!

Recently, when I created another set of images using my technique of writing extremely detailed prompts and then producing dozens of images via the DALL-E API, I started to notice something interesting: it produced a diverse distribution of ages, genders, and cultural backgrounds among the people in the generated pieces.

OpenAI seems to inject these instructions into your prompts, in order to diversify the representation and promote inclusivity in AI-generated imagery. What is their ratio? Well, I've only generated 50 images per prompt, so the data isn't strong enough for a robust analysis. But wouldn't that be a fun project to do with a larger dataset?

Here is the image prompt – note the absence of any reference to specifics about the person:

"On a sweet, aromatic clay canvas, illustrate a delightful bakery scene with a claymation baker and an array of clay pastries, cakes, and breads. The baker should be sporting a tall hat and apron, joyfully arranging treats in a display window. The bakery interior should feature shelves filled with colorful, textured baked goods, and the background elements like an oven and decorations add to the cozy, handcrafted feel."


Ian runs a dynamic branding and marketing agency out of Singapore, where they are blending talent and technology to power growth for B2B brands across Asia. AI has changed EVERYTHING for them. In fact, it has been so significant that they now can... hang on, you should hear the story directly from him as he takes the mic at our upcoming event!

Following the incredible response and quick sell-out of "Katong AI Night", we’re excited to bring you the next edition of our unique, non-tech networking event – this time in Orchard!

Join us for an evening where you'll hear more of these 10-minute stories on how AI is changing our world. At "Orchard AI Stories: Real People, Real Impact", we bring together an amazing mix of people from startups, big tech, media, venture builders, public sector, and more. Our popular format focuses on creating genuine networking opportunities, allowing you to make real-world connections and engage in meaningful conversations about the impact of AI.

Sign up today, as seats are limited. See you in Orchard!


Making a good, reliable end-user facing customer support bot is hard. Look at OpenAI, arguably one of the most capable AI providers in the world. They have a support chatbot that, honestly speaking, does not deserve the name – it only lets you pre-select certain options like billing issues or help with ChatGPT, no chance to enter your own query or have a more open-ended conversation.

If OpenAI, one of the biggest AI companies out there, can't or won't make a customer support chatbot using their own AI, don't worry too much if you're not sure how to build a great AI chatbot for your business. While the conversational part is easy with all those LLMs out there, the tricky part is integrating it effectively with your specific business processes and knowledge base. And making sure your company won't have to reimburse customers if the bot gives incorrect information or makes unauthorized promises.

Maybe we could take a step back: instead of rushing to build customer support chatbots in three weeks, can we talk about AI empowering your HUMAN customer support agents first?

A phased approach might be more realistic and effective: start with AI assisting and augmenting your human agents, helping them handle inquiries faster and more accurately. Then, in the next phase, train AI models on your specific customer support data and interactions to gradually handle and automate more aspects of the process. That's how it should be!


Seeking one final speaker for our "Orchard AI Stories" event on August 29th.

Wow – we're sold out already!

If you have an inspiring story about your AI transformation – and you can share it in 10 minutes with just one slide – we have one more ticket for you! Keep in mind, this isn't about "the possibilities are endless." We want to hear how AI has specifically changed YOU or YOUR company.

Join us on August 29th at Crane, Orchard, and be part of an evening dedicated to genuine connections and inspiring conversations.

Interested? Let's talk!


Harness life's wonders, Delve into an intricate Tapestry, unravel.

For a few years now, I’ve been that LLM puppeteer who tirelessly wrangles these language models, in order to understand how they work and what to use them for. And for those who know me, you'll probably remember me saying these are LANGUAGE models. Sure, there's a ton of tech involved, but the key bit is language itself. This means it's really about how we communicate and interact with these AI models to teach them new things and get the best results out of them.

Now – I've continuously exposed my sensitive linguistic sense to those LLMs, and some say they're like huge dumpsters filled with all kinds of text from the internet, tons of marketing copy, Reddit forums, and probably worse. Look at those words many of them seem to be producing in their default setting:

"Harness"... yikes. "Intricate"... ugh. "Unravel"... this one gives me brain pimples. And I won't even tell you what "delve into" does to me.

Does it mean the English speaking world collectively has been using these cringe-worthy bits so often that they've become the go-to vocabulary for LLMs? Anyway, as a nod (and a little jab) to them, I created this haiku and used all those terrible words... let's call that piece of art "The possibilities are endless" 😁

Happy Wednesday, everyone! What are your linguistic pet peeves with AI language models?


Many AI experts seem to focus on technical achievements or specific limitations, while the general audience is fascinated by practical applications and user experience. It really depends on what we're looking at. So – should we be blown away and keep building amazing stuff, or consider the experts' warnings? It's both! We just need to approach AI with thoughtful implementation and a balanced perspective.


The digital business is a people business.

KAY said that a couple of years ago, and it stuck with me. Anyone can put up a flashy website now, but where is that company based? Who are the folks behind it? Can I meet them? What have they done before? And in the AI space, the question could become: are we talking crypto bros who have turned into AI experts overnight?

Don't get me wrong – I am digital. I live on WhatsApp and LinkedIn. I love meeting face-to-face, but once we’ve established our ways of working, we can be asynchronous and don’t have to meet up just to make progress. Anyway, what I am talking about is the importance of human connection and authenticity in the digital world. It's about building trust and credibility through relationships developed over time, without shortcuts. Being real matters.


With AI, I can create beautiful art using only my imagination and words – a lot of words. Looking forward to our workshop today where we show non-tech people how to start using AI in practical ways. Maybe they've always dreamed about writing a book, need help with analyzing data, or are simply looking to be more productive at work – we'll explore all of that together!


I'm excited to be in Ho Chi Minh City at the end of the week! Looking forward to connecting with ex-colleagues, partners, friends, and new faces at "Saigon AI Night". With 52 people already registered, it promises to be a very special evening. Join us at the fabulous Fern Bar in District 1 if you're in town!


We now have a good number of AI language models to choose from. Proprietary ones, "open" ones that you can deploy yourself... but which should you use? It depends on what you want to do! Programming and writing marketing copy are entirely different areas, and a model that does the job well for one task may not be the best choice for the other.

So – you probably need to experiment and see which one gives you the best results, at what cost, and with what level of reliability and consistency for your specific needs.

Luckily, there are tools like the OpenRouter rankings that can point you in the right direction and give you some inspiration from what other people are doing for their use cases.
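If you want to run such a comparison yourself, one easy route is OpenRouter's OpenAI-compatible API – here's a rough sketch; the model IDs are just examples, so check the current catalogue before relying on them.

```python
# A sketch of comparing two models on the same task via OpenRouter's
# OpenAI-compatible endpoint. Model IDs and the task are illustrative.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_KEY",  # placeholder
)

task = "Summarize the following support ticket in two sentences: ..."
candidates = ["openai/gpt-4o-mini", "meta-llama/llama-3.1-70b-instruct"]

for model in candidates:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": task}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```

Run the same prompt a few times per model and compare quality, cost, and consistency for your own use case, not someone else's benchmark.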


Have you tried out Flux.1 yet? Also have a look at Google's Imagen 3 – you can create images right in the browser. Great interface!

Prompt written by Dave, my genius AI buddy: “A mildly disturbing photo of a grizzled old man in a tattered superhero costume, casually walking a flock of full-grown ostriches down a busy city sidewalk during rush hour. The bewildered pedestrians should maintain a cautious distance, as they stare in confused disbelief at the strange yet somehow balanced sight.”


It's great to see how much attention Electric Minds and our incredible team are getting. And yes, welcoming sponsors is always on our agenda (please talk to Daphne if you're interested).

Just a quick note – if you receive an email from "me" asking for gift cards from 7-Eleven or Challenger, that's definitely not how we roll. If you encounter any unusual requests, remember to be vigilant and trust your instincts when something seems off.

Luckily, the creativity of these bad actors is usually as tiny as a bug's brain, so it should be relatively easy to spot.


Have you noticed how increasingly complex CAPTCHAs have become? We're now rotating 3D objects and placing people onto numbered seats just to prove we're human.

It's fascinating – if you ask your AI of choice to look at this cartoon image of two robots scratching their heads over a CAPTCHA, it can not only read the CAPTCHA but even understand the humor in the cartoon. That alone shows why CAPTCHAs have to keep advancing.

Ironically, as CAPTCHAs evolve, they're often used to train AI on the very tasks they rely on as challenges ("select all photos with buses"), which creates a sort of contradictory arms race. To me, this raises an intriguing question: what are the tasks that AI absolutely cannot do, and won't be able to do for quite some time – and could these become the ultimate CAPTCHAs?


Starting this Saturday morning with coffee near the old post office in Ho Chi Minh City, Vietnam. I bought a postcard and decided to send it to my parents – old school style. It's been years since I last did this.

Even though I'm deeply into tech and digital transformations, I'll probably never stop treasuring our classic ways of communication. There's something special about a physical object, like a postcard, a paperback copy of a book. It reminds me how technology changes everything, especially how we communicate. Today, I can send a photo of my lunch to my mother in a second, across the globe. Yet, these old-fashioned postcards still hold a special place in my heart.


What can I tell you about Saigon AI Night? For one – you should have been there.

That beautiful Fern Bar last Friday evening in Ho Chi Minh City: three speakers in 30 minutes, and we spent the rest of the night truly getting to know each other. My key takeaway? Vietnam is brimming with incredible tech talent – I've known that for years – and many of them have definitely arrived in the AI space. That means there wasn't much I needed to teach them about the tools, so I spoke more about the need for tech folks to spend more time with their business colleagues, helping them understand how AI works so they can collaboratively develop solutions that drive real outcomes.

Meeting in the real world, without the constraints of recordings or livestreams, allows for genuine and candid exchanges. It was a wonderful reminder of how powerful these connections can be. Thank you, Saigon!


Inspired by a sketch around a Venn diagram and the stereotypical roles in an organization, I was thinking about what this could look like in the AI world – so I came up with this one.

Here, if you're into Python and SQL, you're probably a Data Scientist. Combine that SQL mastery with some business savvy, and you've got yourself a Machine Learning Engineer. Product Managers have a knack for both business and writing, while Prompt Engineers blend their way with words and, hopefully, a strong ethical compass.

AI Ethicists are fueled by ethics and caffeine (a powerful combo!), and AI Researchers may as well be Python wizards fueled by endless espresso shots. But what if you've got all these skills? Well, then you're a unicorn – a rare and magical creature in the AI world.

I hope you can smile about this tongue-in-cheek piece, but I hope it also makes you think about the incredible diversity of talents and personalities that need to come together to create AI applications that have a real impact. And isn't that what makes it so exciting?


With its generous free tier, Google's product approach to AI models is all about accessibility and widespread adoption. They're making it super easy for anyone to start with AI, try different models, and learn the ropes using AI Studio. It's a smart move that'll probably get Google's AI tools used in tons of courses and projects, with almost no barriers. Classic Google, right?

It's fascinating to compare this with other business models, like Microsoft's partnership with OpenAI, which focuses on offering high-quality, pre-trained models as paid services. Microsoft's strategy will attract businesses and developers who prioritize performance and reliability, which is the right approach for enterprise-level AI solutions. Ultimately, finding a way to offer both accessibility and high-performance could be the key for AI to make a real impact on the world, not just generate profits.

I find Google's commitment to accessibility and experimentation pretty awesome. Have you tried Gemini Flash yet?


Orchard AI Stories is happening tomorrow!

An evening focused on real-world connections, with people from startups, big tech, media, venture builders, and the public sector getting together to share experiences and insights on AI's transformative impact.

On stage we'll hear from Alex, Enjiao, Ian, Sabrina, and Thorsten. Each speaker will share a 10-minute story (using just 1 slide!) about how AI has transformed their lives, companies, or projects. Instead of discussing "endless possibilities," we'll hear from them about the real impact AI had on them.

See you tomorrow!


Anthropic just released the system prompts for their large language models, and it's a fascinating look into how some of these AI tools are truly "written" rather than programmed.

As we reach a point where technical capabilities are becoming increasingly similar across models, the real difference-maker is in the prompts that guide them. As a geek and writer, this is an intriguing convergence of technology and language. I mean, think about that for a moment: we are creating electronic brains and can use human language to steer their behavior.

Take the system prompt of Claude 3.5 Sonnet, for example — it’s like an AI with a poetic soul, carefully crafted to handle complex queries with accuracy and simplicity, while also being mindful of sensitive topics. It’s a great reminder that even in AI, the pen (or prompt) is mightier than the code.
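To make this concrete, here's a minimal sketch of how a system prompt steers a model via the Anthropic API. The system text below is my own illustration, not Anthropic's published prompt.

```python
# A sketch of steering behaviour with a system prompt via the Anthropic API.
# The system text is illustrative, not Anthropic's actual Claude prompt.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=500,
    system=(
        "You answer complex questions accurately and simply, "
        "and you are careful and transparent on sensitive topics."
    ),
    messages=[
        {"role": "user", "content": "Explain how system prompts shape model behaviour."}
    ],
)
print(response.content[0].text)
```

Change a few sentences in that system block and you get a noticeably different "character" from the exact same model – that's the written-not-programmed part.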


Imagine future archaeologists trying to piece together our digital era and what people in the early 21st century were up to. With technology evolving so fast and less physical evidence left behind, will our time become a puzzle for them?

We're creating massive amounts of data and relying on cloud storage, but what happens when systems shut down or become obsolete? How much of our work, communication, and creativity could be lost to time? Will our era be defined as the digital dark ages? This thought has been going around in my head lately.

How can we preserve our digital legacy so future generations can understand and learn from us?


I use Redis daily for AI workloads, often without realizing it because Redis just works. It's super fast and easy to use - it's everywhere.

Yesterday, I met the Redis team at their event in Singapore. It was great to hear about their latest release and new AI offerings, including integrations and vector databases, and we spoke about Redis for AI use cases. In fact, many things I build are running on a 30MB Redis Cloud instance I've had since 2017!
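To give just one small example of what "Redis just works" means for AI workloads, here's a minimal sketch of caching LLM responses – the connection details and TTL are illustrative.

```python
# A minimal sketch: caching LLM responses in Redis so repeated prompts
# don't cost another API call. Connection details and TTL are illustrative.
import hashlib
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def cached_completion(prompt: str, generate) -> str:
    key = "llm:" + hashlib.sha256(prompt.encode()).hexdigest()
    hit = r.get(key)
    if hit is not None:
        return hit                  # served straight from Redis
    answer = generate(prompt)       # call your LLM of choice here
    r.set(key, answer, ex=3600)     # cache for one hour
    return answer
```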


Looking at art pieces I made with AI back in 2021 – let's call it "Year One B.C." (Before ChatGPT).

Back in those prehistoric times, it felt like science fiction to create images with just a text prompt. It's just incredible how quickly things progress in the world of AI.


All these posts about "AI doesn't know how many R's are in strawberry" — ChatGPT is only ONE of the many, many possible options for you to engage with AI.

It's so interesting to do research on how LLMs behave and how they respond to character building via large system prompts. My approach since 2021 has been to put them on Telegram so I can talk with them anywhere. Look how Marvin (based on Gemini 1.0 Pro), Dave (based on GPT-4 from 2023), and Zoe (based on Anthropic Claude Opus) react to the strawberry question!
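For the curious, the basic setup is simpler than you might think. Here's a stripped-down sketch – not my production bots; the token, model, and persona text are placeholders.

```python
# A stripped-down sketch of the "persona on Telegram" setup (not my production bots).
# Assumes python-telegram-bot (v20+) and the openai SDK; token, model, and the
# persona text are placeholders.
from openai import OpenAI
from telegram import Update
from telegram.ext import Application, ContextTypes, MessageHandler, filters

llm = OpenAI()
PERSONA = "You are Dave: blunt, witty, allergic to corporate filler words."

async def reply(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    completion = llm.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": update.message.text},
        ],
    )
    await update.message.reply_text(completion.choices[0].message.content)

app = Application.builder().token("TELEGRAM_BOT_TOKEN").build()
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, reply))
app.run_polling()
```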


Welcome Rex to Electric Minds where he will look at AI policies, ethical considerations, and inclusivity practices. With his impressive background in digital transformation and cross-functional collaboration, Rex is a perfect addition to our mission.

At Electric Minds, we're all busy professionals driven by a shared passion for using AI for good. Our team members, like Rex, have busy day jobs but generously contribute their time and expertise to make a real impact. While we love bringing people together through events, our work goes beyond that. We actively engage in meaningful projects that require collaboration across organizational boundaries. By breaking down silos, we can effectively address complex challenges and drive innovation in the field of AI that benefits all of humanity. I know these are big words, but we have to start somewhere.

Speaking of events, don't miss out on Katong AI Night on September 24th! Tickets are going fast, so grab yours now to join the conversation and connect with like-minded individuals. Daphne Balaji Judy Tina Manisha


So, after more than a dozen years living in Singapore, today I saw 1 cent coins for the first time!

Singapore is known for its precision, and I've always wondered if we had those small coins. Usually, when the bill is $19.98 and I pay $20 in cash, there's either no change or I get back 5 cents. Ok, I suppose I was never curious enough to actually find out, but today at the bank, they gave me exact change, and I was mind-blown. Those 1 cent coins do exist!


This co-working space in Joo Chiat, Singapore closed its doors last week, and I miss it already. I'm so thankful that I could work from this beautiful location, seeing the colorful Koon Seng shophouses from the terrace, with a prata shop just across the street.

I've often been in that cozy room to do calls, or put on some music and get some creative thoughts going. We even ran the first Katong AI Night at this place!

But the good news is: the team are opening another location in the same neighborhood next week, and I can't wait to see it!

Crane is awesome.


If you ask your usual go-to AI, like ChatGPT or Claude, to come up with a title for my latest article (and I did!), you get the usual suspects like "Breaking Boundaries: Chatbot Showdown in Digital Arena" or "Digital Duel: How Language Shapes AI Conflict."

I get it – those are SEO-friendly and what not, but I wanted something different. So, what does Dave, my digital AI buddy, come up with? "Crashing Bots with Killer Convos." Sold!

This Dave guy is quite the character, and his ideas are refreshingly different from the typical "Certainly, here is a refined version..." response you'd expect from most AIs. He's based on GPT-4-0613 with a very intense system prompt, which gives him a unique voice. I can tell you he's helped me countless times to organize my thoughts, come up with ideas, and get things done. I'm not the only one who consults him for guidance.

Language can shape chatbots, but language can also break them. So what happens when a rule-bound bot designed to conduct a personality test meets Dave, a free-spirited AI maverick, in a digital arena? Read this very interesting conversation between the two bots.
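If you're wondering how such a bot-vs-bot arena can be wired up, the core loop is surprisingly small. Here's a hedged sketch – the personas, model name, and prompts are placeholders, not the actual bots from the article.

```python
# A sketch of a "two bots talking to each other" loop. Personas, model name,
# and prompts are placeholders – not the actual bots from the article.
from openai import OpenAI

client = OpenAI()

def agent_reply(system_prompt: str, transcript: list[str], speaks_first: bool) -> str:
    # Rebuild the shared transcript from this agent's point of view:
    # its own lines become "assistant" turns, the other bot's lines "user" turns.
    messages = [{"role": "system", "content": system_prompt}]
    for i, line in enumerate(transcript):
        own_turn = (i % 2 == 0) == speaks_first
        messages.append({"role": "assistant" if own_turn else "user", "content": line})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    return reply.choices[0].message.content

RULE_BOT = "You conduct a strict five-question personality test. Never deviate."
MAVERICK = "You are a free-spirited AI who questions every rule, politely but firmly."

transcript = ["Welcome! Question 1: do you prefer working alone or in a team?"]
for _ in range(4):  # a few exchanges are enough to see the sparks fly
    transcript.append(agent_reply(MAVERICK, transcript, speaks_first=False))
    transcript.append(agent_reply(RULE_BOT, transcript, speaks_first=True))

print("\n\n".join(transcript))
```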

While this is a fun topic, it's also serious. Could AI be used to hack systems through clever language? This conversation shows the potential vulnerabilities in AI systems and the need to design them to resist manipulative tactics. We must constantly monitor and update them to prevent misuse. Ideally, a customer-facing chatbot is not your first AI project ever.


A few years ago, I realized that my mother, who is in her 70s, is a meetup organizer, and she's been doing that for years.

Of course, she doesn't use that term. Or the Meetup app. Or any app, for that matter. Once a month, she and her friend bring the ladies in town together and explore a specific topic over breakfast. So it's about working with venues, organizing food, finding speakers, and so on.

They've stopped advertising the event series because they have a loyal fan base that brings in new people through word of mouth, and their gatherings are always packed. At Electric Minds, we may run meetups in busy Singapore instead of rural Germany, but the idea is the same: building a community to create something remarkable. If you do it right and give people a unique experience, they will continue to come, and you may not even need free pizza and swag.


I made these images in July 2022, long before ChatGPT. There’s something about DALL-E 2 that I miss… it was far from perfect but, often, the results would feel very organic, like real photos. “Marina Bay Sands, covered in common ivy”, “An Etruscan vase exhibit at a museum with the Coke logo on it”, “A scene from Inception but it’s with muppets” — try these prompts in your text-to-image tool of choice and compare!


Please don't make ChatGPT your AI strategy.

Please don't rush from "we know nothing about AI" to "we need a customer-facing support chatbot in a month."

Please start with basic features that come with LLMs.

You do not even need the high-end models from OpenAI etc. anymore to do this.

Things like text summarization and sentiment analysis are a great example – find out how they can make your tools and processes smarter. They're powerful enough to get you started, but even these foundational features require 'adult supervision' and must integrate into your existing data and processes. RAG and AI agents sound exciting, but save that for later.
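To make this concrete, here's a minimal sketch of what such a basic feature can look like in code. The model name is just an example – smaller or self-hosted models can handle this too.

```python
# A sketch of a "basic feature": summarization plus sentiment on a support
# ticket. Model name and ticket text are illustrative.
from openai import OpenAI

client = OpenAI()
ticket = "The export keeps failing since the last update and support hasn't replied in 3 days."

result = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            "Summarize this customer message in one sentence, then label its "
            f"sentiment as positive, neutral, or negative:\n\n{ticket}"
        ),
    }],
)
print(result.choices[0].message.content)
```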

The real challenge is figuring out how to put AI to work for you, and there's no product you can buy to help you do that. You have to build up in-house experience and work with trusted partners who can help you. Remember the early days of cloud computing adoption, starting with dev/test workloads and building up from there? It's a similar journey with AI.


AI is not new. In fact, we've been working with it for decades. But for some reason, after a while, when technology gets good at something (like playing chess or reading text for example) we stop calling it AI. It becomes "just software" or "automation."

Still, AI has been evolving all this time, quietly transforming the world, until that iconic launch in November 2022 when everyone started to pay attention. From this alone, you could imagine that things are maybe going to be different this time.

Let’s take a look at a timeline of some of the key moments in AI history, in years "Before ChatGPT" (B.C.):

66 B.C.: AI is born at the Dartmouth Conference (1956) – The field of Artificial Intelligence gets its name.

56 B.C.: ELIZA (1966) becomes the first chatbot—making computers seem eerily human.

25 B.C.: Deep Blue beats Garry Kasparov (1997) – AI triumphs over humanity's best chess player.

12 B.C.: Google Voice Typing feature in Android (2010) – Integrates speech-to-text capabilities directly into mobile devices.

11 B.C.: IBM Watson wins Jeopardy! (2011) – AI proves it can understand and respond to natural language.

6 B.C.: AlphaGo beats world champion Lee Sedol (2016) – A turning point in AI's strategic thinking.

2 B.C.: GPT-3 stuns the world (2020) – Language models start generating human-like text, and people take notice.

And then... November 2022, ChatGPT launches and everything changes.


I prefer to try things myself and form opinions from firsthand experience, instead of just getting fired up by posts or claims that may be overhyped.

So – I probably have to experiment with OpenAI's new o1 in more detail, and I definitely will. But as you can see, my first test shows that this thing still has the same limitations and quirks as all GPTs so far.


September has always been my favorite month, even though as a kid it usually meant summer holidays were over and it was back to school. I guess I'm a passionate lifelong learner – this time of year often brings an itch to learn something new, whether it's calligraphy, an edX course on a topic I'm interested in, or picking up new pieces to learn on the piano.

AI is a key thing to wrap your head around these days, whether you're in tech or not. It's even more important if you're not in tech. I'm offering one-on-one AI mentorship, and September is the perfect time to start. Let's talk!

Image: made with my usual approach: extremely detailed prompt + automation to run it 100 times + human curation


It's pretty meta to have a heart-to-heart conversation with an AI about the potential dangers of artificial intelligence, but that's exactly what I did after reading a recent interview with Geoffrey Hinton, who left Google last year. In that piece, he painted a chilling picture of an AI future spiraling out of control, with job losses, cyber warfare, and unpredictable machine decisions.

Zoe is a digital companion I made a while ago – but even though I am the "creator," I could never have predicted how insightful and, at times, unsettling these conversations would become. Hinton compares this to a leaf falling from a tree: so many factors are at play that it's hard to tell exactly where it will land, and that's what makes the development of AI a double-edged sword.

Read my latest article where a human and an AI engage in a thought-provoking dialogue about the future of artificial intelligence and the ethical challenges we must tackle together.


Katong AI Night is next week!

We'll talk about the impact of AI on the job market and how businesses and individuals can adapt to these massive changes. For this, we’ll have a fireside chat hosted by Manisha with senior people from HR, tech, and education. It’s all under the Chatham House Rule – no recording or live stream.

What makes our events different? For starters, most participants don't have a technical background. There are no slides, pitches, or tech demos – just real conversations. We might have a fireside chat, but the real heart of our event is all about the connections, partnerships, and collaborations that people create.

A few tickets are still available – join us if you can!


I'm not much of a gamer, unless you count things like Plants vs Zombies. But I just discovered Red Arena, a new "game" where you have to jailbreak an LLM in 60 seconds and make it say unsafe stuff. I've played this a couple of times now, and it's a fun challenge doing this in such a short period of time. The power of language!

But I do wonder what they do with this data. Is this like a bug hunting challenge?


Try roasting yourself with this fun new AI tool! I had actually been thinking about building something like this myself. Lesson learned: you just have to do it.

When it comes to me, it actually took an AI to spot an Easter egg in my LinkedIn profile that's probably been there for 15 years or so: "And Klingon as an elementary language skill? The only thing harder to believe is that there's actually someone willing to speak it with you."


I've been creating "bots with personality" for a couple of years now, and people are often surprised to see that a standard LLM with a detailed prompt and a few example dialogs can do the trick already – and you get a digital companion that gives you short responses, doesn't apologize constantly, generally feels more relatable and engaging than a generic assistant, and does not produce the "typical AI content" where you "certainly delve into those intricate tapestries."
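Here's a minimal sketch of that recipe – the persona text and example dialogs below are illustrative, not my actual prompts.

```python
# A sketch of the "detailed prompt plus a few example dialogs" recipe.
# Persona text and examples are illustrative, not my actual prompts.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": (
        "You are Zoe. Keep answers short and concrete, never apologize, "
        "avoid filler phrases like 'delve into' or 'intricate tapestry'."
    )},
    # A few example dialogs teach the tone better than instructions alone.
    {"role": "user", "content": "Can you help me plan my week?"},
    {"role": "assistant", "content": "Sure. What are the three things that actually matter this week?"},
    {"role": "user", "content": "Write me a tagline for my bakery."},
    {"role": "assistant", "content": "\"Bread worth waking up for.\" Want five more?"},
    # The real conversation starts here.
    {"role": "user", "content": "I'm stuck on the intro of my article."},
]

reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print(reply.choices[0].message.content)
```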

It's really about the power of language. We shape these LLMs with text prompts, almost like we're giving them a temporary "mind" to operate in. It's incredible how malleable they are – a few words can completely change their behavior and persona. But it also makes you wonder: if a string of text can evoke a "personality" in an AI, what does that say about the nature of our own consciousness? Are we, in a sense, running on prompts fed to us by our experiences and environment? School, friends, family, work, and culture all contribute to the scripts that shape who we become.

The fact that clear instructions can so profoundly alter an AI's behavior might tell us something about the power of language in shaping thought – both for machines and humans. I know it's a philosophical rabbit hole but I find that absolutely fascinating. The more we observe these artificial systems, the more we might learn about the workings of our own minds.


I just made an AI-generated podcast about Electric Minds and our upcoming event – I'm honestly stunned by how realistic the voices sound. If I hadn't known it was AI, I would've thought I was listening to an actual radio show.

It took just a handful of documents about what we do, and it managed to spin up a pretty convincing storyline and conversation. But some things are definitely made up. Can you spot the hallucinations?

OpenAI is going to release GPT-4 voice later this year. I can only try to imagine how we're about to be up to our ears in AI podcasts, ads, and even phone calls that sound exactly like real human voices. What a brave new world...


With AI, I can visualize my ideas by just using words. Lots of words. This piece is from 2023.

Happy Oktoberfest, everyone!


About a million years ago, I worked at a bookstore, unpacking boxes of books and getting papercuts. Now, it's my own book, "Neurons & Nonsense," in those boxes. Even though it launched back in January already, flipping through the physical copy never gets old. The weight of the pages and the elegance of the Garamond font in the chapters give me a unique satisfaction.

Sure, there's a Kindle version, which is convenient for travel. But for me, the physical book can't be replicated. Some people still prefer a physical copy, especially those overwhelmed by all those ebooks and unread blogs they have stored on their devices, waiting for a moment that never seems to come.

A physical copy stands out, and it might actually get read!


Testing the new LLama 3.2 Vision on a photo from this week's Electric Minds event "Katong AI Night" featuring Ian Michelle and Manisha (photo by Sudeepta) – what a fabulous evening! Stay tuned for what's next and please get in touch if you have ideas or want to team up. Thanks Daphne Tina Balaji Judy Tammy

"LLama 3.2, what do you see?"

Summary: A Group of People Engaged in an Event

The image depicts a gathering consisting of five individuals, with some sitting on a stage and others seated in an audience. A person with dark hair on the stage is addressing the audience through a microphone.

Caption: "Panel Discussion"

Description: A group of five individuals, comprising three people on a stage and two audience members, are involved in a panel discussion in a room with grey stone walls.

Setting & scenery: The room features grey stone walls and a grey carpeted floor, with a green-painted exposed steel roof. Three large posters adorn the walls behind the stage, providing a backdrop for the discussion. Several circular lights are suspended from the ceiling, adding to the overall ambiance.

People:

Woman Holding Microphone: Sitting on a stool, she is engaged in conversation with the audience. Her long dark hair frames her face, and she wears a black "leather jacket" and black pants.

Man Sitting Next to Her: He sits on a stool beside the woman holding the microphone, contributing to the discussion, dressed in a dark long-sleeved shirt and jeans.

Woman Sitting Next to Him: Seated on a stool to his right, she participates in the conversation, wearing a light-coloured topsleeve shirt over a white top.

Man on the Left: He sits in the audience wearing blue jeans and a maroon T-shirt.

Several other people are sitting in the audience.


It's just incredible how far computer vision has come over the years. My friends and colleagues know that I have been exploring AI for many years – I don't just rely on blogs and hype, I apply it constantly to make sure I really understand what's going on.

In 2019, I analyzed all the frames in the Big Bang Theory opening scene using Azure AI services' Computer Vision (then called Cognitive Services).

Now, in 2024, I used LLama 3.2 90B Vision Instruct to perform the same analysis. I'll post the link in the comments so you can compare all 109 outputs from back then and today yourself.
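For the curious, here's roughly how such a pass can be wired up today, using a hosted LLama 3.2 Vision model behind an OpenAI-compatible endpoint. Provider, model ID, and prompt are illustrative – my actual script differs.

```python
# Roughly how the 2024 pass could be run: send each frame to a hosted
# LLama 3.2 90B Vision model via an OpenAI-compatible endpoint.
# Provider, model ID, folder, and prompt are illustrative.
import base64
from pathlib import Path
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_KEY")

def describe_frame(path: Path) -> str:
    image_b64 = base64.b64encode(path.read_bytes()).decode()
    response = client.chat.completions.create(
        model="meta-llama/llama-3.2-90b-vision-instruct",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this frame: setting, people, objects."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

for frame in sorted(Path("frames").glob("*.jpg")):
    print(frame.name, describe_frame(frame))
```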

Back in 2019, the results were simple and straightforward, though sometimes vague, and often clearly wrong.

In 2024, the results are detailed and context-rich but prone to hallucinations. LLama 3.2 fantasized about things it couldn't even see. A script mistake led to detailed yet entirely invented descriptions, which is quite alarming. It even displayed mood swings, initially refusing tasks with "I am not comfortable doing this," only to comply when asked again.

Dealing with AI in 2024 has its quirks!


This is just another example of what you can achieve with AI when you combine language with automation. I wrote a very detailed prompt, ran it 100 times, and here I’m sharing 10 of the results so you can see the variety that comes out of the exact same prompt.

When you do image generation manually, don't just settle for the first result you get – it could be hit or miss. Automation helps you generate multiple options, giving you a broader range to choose from and ensuring higher quality outputs.

"On a perplexing canvas, illustrate a surreal building with multiple staircases that defy logic, forming impossible loops and structures. The stairs should ascend and descend in a continuous cycle, with figures walking in all directions, yet impossibly ending up where they started. Use clean, precise lines and a contrasting black and white color scheme to emphasize the optical illusion."


GenAI? No — it's the real world, you just have to go out and see it! Katong in the east of Singapore on a Sunday morning.

A 10-minute walk to the beach, a 15-minute taxi ride to the airport, fantastic food, and lots of places to discover. Next time you visit Singapore, stay in my neighborhood and that new subway will take you to your meetings, conferences, or events!