Danny Hearn – Deeply Human Design Ltd

How I built an AI chatbot web app from scratch… using AI

Writing this feels a bit odd, knowing there’s now a chatbot on my site that could tell you the same story. But here goes. I’m Danny, a UX consultant who’s done a bit of everything, and if I’m honest, that “bit of everything” is exactly why I wanted to build my own AI assistant.

My chat AI web application

The final result

In seven days I built a fully functioning web chat application with a range of features:

  • A front-end chat-style interface with interactions
  • In-chat cards to surface and highlight content
  • Dynamic formatting to space out content
  • Scraper scripts to build the database from content across a range of sources
  • Chat responses fine-tuned to match my style and tone of voice

Why build an AI chatbot?

Whenever someone asks what I do, I fumble around. My CV is more like a patchwork quilt than a straight line. I’ve led teams, run design sprints, and worked with charities, retailers, and all sorts in between. Trying to sum that up in a neat sentence is impossible. I wanted something that could answer, in plain English, “What’s Danny’s deal?” and actually pull in real examples, projects, and testimonials: the kind of stuff that matters if you’re thinking of working with me.

My (modest) technical starting point

To be clear, I’m not a developer. The last time I deployed anything to the web, ASP and Flash were the height of sophistication. GitHub, Node.js, and the terminal felt like mysterious tools for “real” tech people. My world is much more Post-it notes and Figma boards than code editors and hosting dashboards.


So I set myself the challenge:
Could I build a chatbot that lives on my website and gives true-to-me answers? No generic waffle, just real content from my own work. Bonus points if it also teaches me a thing or two about making web apps with AI along the way!

First steps: asking AI for help

Initially I did what everyone does now and asked ChatGPT for a hand. I found it was quite easy to set up your own ‘GPT’: I referenced web pages and set up a basic prompt. However, I soon saw the limitations. I couldn’t embed it into my website, people would need a ChatGPT account, the responses were hard to refine, and it certainly couldn’t embed ‘cards’ in the chat. Cards are something I’ve always favoured for evolving chat into something more visual, moving it on from a simple text format.

My first GPT

The next iteration

So after a bit more ChatGPT-ing, I understood there was a way to build an assistant using the OpenAI Assistants API. This would allow me to ping the chatbot with a question from my website and pull its response back in. A proper chatbot.
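For the technically curious, the round trip boils down to something like this. It’s only a sketch, assuming the OpenAI Python SDK’s beta Assistants endpoints; the assistant ID and the question are placeholders rather than my real setup:

```python
# Sketch of the "send a question, pull the response back" round trip using the
# OpenAI Python SDK's beta Assistants endpoints. ASSISTANT_ID is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
ASSISTANT_ID = "asst_xxxxxxxx"  # placeholder for the assistant set up in the dashboard

def ask(question: str) -> str:
    # Each conversation lives in a thread; add the user's question to it.
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=question
    )
    # Run the assistant against the thread and wait for it to finish.
    run = client.beta.threads.runs.create_and_poll(
        thread_id=thread.id, assistant_id=ASSISTANT_ID
    )
    if run.status != "completed":
        return "Sorry, something went wrong."
    # Messages come back newest first; the top one is the assistant's reply.
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    return messages.data[0].content[0].text.value

print(ask("What's Danny's experience with design sprints?"))
```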

First I needed to fill it with content and knowledge. I thought I’d just point it at my website and let it figure it out, as I had done with the custom GPT. I quickly learned it doesn’t work that way: the assistant doesn’t just browse and understand your site. I’d need to format the data properly for my assistant to use.

I asked ChatGPT to help me build a way to scrape my own website content. This worked almost immediately with a Python script, but only up to a point. I ended up with big blobs of text, and the AI didn’t always use the data from the text files to answer in a way that felt tailored or useful. It was also making things up…
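For flavour, that first-pass scraper was roughly this shape. A minimal sketch assuming the requests and BeautifulSoup libraries; the page list is a placeholder, not my real sitemap:

```python
# Rough sketch of the first-pass scraper: fetch each page and dump its visible
# text into a .txt file. Assumes requests and beautifulsoup4; placeholder URLs.
import requests
from bs4 import BeautifulSoup

PAGES = [
    "https://example.com/about",
    "https://example.com/case-studies",
]

for url in PAGES:
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    # Strip scripts, styles and navigation so only readable copy is left.
    for tag in soup(["script", "style", "nav", "footer"]):
        tag.decompose()
    text = soup.get_text(separator="\n", strip=True)
    filename = url.rstrip("/").split("/")[-1] + ".txt"
    with open(filename, "w", encoding="utf-8") as f:
        f.write(text)
```

The result was exactly those “big blobs of text”: readable, but not something the assistant could pick apart reliably.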

I came to understand that language models prefer data broken into small, readable “chunks”, so the assistant could pull specific testimonials, case studies, or details about me, rather than just repeating back random paragraphs.

After some back and forth, and a lot of prodding, I got ChatGPT to help me convert my site into structured JSON files. I ended up with separate little databases for testimonials, case studies, and general content, and I fed those into the assistant.
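To give a flavour of the chunking step, here’s a simplified sketch. The field names are illustrative rather than my exact schema:

```python
# Simplified sketch of turning scraped text files into small JSON "chunks".
# The field names (id, type, title, text, source_url) are illustrative.
import json
from pathlib import Path

def chunk_text(text: str, max_chars: int = 800) -> list[str]:
    """Split text on paragraph breaks, keeping each chunk under max_chars."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

records = []
for txt_file in Path(".").glob("*.txt"):
    for i, chunk in enumerate(chunk_text(txt_file.read_text(encoding="utf-8"))):
        records.append({
            "id": f"{txt_file.stem}-{i}",
            "type": "general",  # e.g. "testimonial" or "case_study"
            "title": txt_file.stem,
            "text": chunk,
            "source_url": f"https://example.com/{txt_file.stem}",  # placeholder
        })

Path("general.json").write_text(json.dumps(records, indent=2), encoding="utf-8")
```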

Filling the Assistant with chunked data

Creating a polished front-end

Now I needed a front-end: something that presents all of this in a nice-looking interface I could put on my website. I asked ChatGPT to write me some code to do this. It created a blend of HTML, CSS, and JS in one file. However, this quickly became risky, as new prompts and tweaks would wipe out what was previously done. The workflow was messy. I had to copy and paste the generated code back and forth between prompt outputs and an HTML page. This was taking way too long. Why couldn’t ChatGPT just make the edits for me?

Enter Cursor, which can read, edit and do lots of wonderful things directly to the files on my machine.

After two or three prompts in Cursor, I saw it spin up a front-end that immediately connected to my OpenAI assistant. It even responded with the right answer, using a real example from my work, all rendered in a fairly convincing way on a web page. I had a genuine “oh wow” moment. The sort of thing I’d have paid an expert a few days’ work for was suddenly happening in front of me.

From early wins to painful lessons

The first 40% came together fast. Suddenly, I had an AI that could talk about my work and pull in relevant content. But I quickly ran into the problems everyone hears about with AI. Sometimes it just made things up. Other times, it mashed up different projects or gave answers that sounded right, but were completely off.

That’s when I stumbled into the world of “RAG”, retrieval-augmented generation. Basically, it means the AI has to check its answer against real data, rather than just making a best guess. Implementing RAG meant fine-tuning my prompt to get the assistant to return structured responses as JSON objects, and updating the code to expect this new format. In the past that would have meant starting everything from scratch, but here it was 60-80% done within two or three prompts.
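The real front-end handles this in JavaScript, but here’s a Python-flavoured sketch of the idea: ask the assistant for a JSON object, parse it, and fall back gracefully when the model ignores the format. The field names are illustrative:

```python
# Sketch of handling a structured (JSON) reply from the assistant, with a
# plain-text fallback. Field names are illustrative, not the exact schema.
import json

def parse_reply(raw_reply: str) -> dict:
    """Turn the assistant's raw text into something the chat UI can render."""
    try:
        data = json.loads(raw_reply)
    except json.JSONDecodeError:
        # The model ignored the format; show plain text and no card.
        return {"answer": raw_reply, "card": None, "source": None}
    return {
        "answer": data.get("answer", ""),
        "card": data.get("card"),      # e.g. {"title": ..., "link": ...}
        "source": data.get("source"),  # link back to the original content
    }

example = '{"answer": "Danny has led design sprints for charities.", "source": "https://example.com/case-study"}'
print(parse_reply(example)["answer"])
```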


Fine-tuning the prompt

But there was another layer of learning: prompts. I’d always thought a prompt was just a question you type in. Turns out, the way you structure a prompt—being really specific and explicit—makes all the difference. I started with vague, half-baked prompts, and watched the bot give me vague, half-baked answers. As I got more explicit, and set clear boundaries (“only answer from this data,” “always include a source link”), the responses improved dramatically.
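Those boundaries ended up living in the assistant’s instructions. The version below is a simplified, illustrative sketch of the shape, not my production prompt:

```python
# A simplified, illustrative version of the kind of system instructions that
# made the difference; not the production prompt, just the shape of it.
SYSTEM_INSTRUCTIONS = """
You are the assistant on Danny's website. Answer questions about Danny's work
in a friendly, plain-English tone.

Rules:
- Only answer using the attached JSON knowledge files. If the answer isn't
  there, say so. Never invent projects, clients or dates.
- Always include a source link taken from the matching chunk.
- Reply as a JSON object with "answer", "card" and "source" fields.
"""
```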

Hitting the wall (again and again)

The next stage was trying to get it all working on my actual website, as at this stage it was only functioning locally on my machine.

Visual bugs crept in. The chatbot looked fine on my computer, but on my site, styles would clash with WordPress and Elementor. Sometimes the chat window just disappeared. I’d fix one thing, only to break another. At one point I spent hours hunting for a stray border-radius property.


Worse were the environmental headaches.

The chatbot ran fine on my laptop, but wouldn’t work at all on my shared hosting with Krystal. I’d upload the code and get nothing but cryptic errors, or it would serve up HTML instead of JSON. Cursor—the AI-powered code editor I used—was a lifesaver locally, but it couldn’t troubleshoot a web hosting environment it couldn’t see. I spent whole evenings copying error logs back and forth, getting nowhere.

The first part of the build was genuinely quick. That last 20%, the fiddly bits, the edge cases, the deployment gremlins, was… exhausting. I even got stuck for three days because one line of code confused the hosting server.

Chat response with no card… a common issue for me

It’s all in the prompt!

Whenever I got really stuck, the way out, once again, was a thoughtful prompt. When the app wouldn’t work on my hosting after a day and a half of pummelling Cursor and getting nowhere, I tried a new approach.

I used ChatGPT’s ‘deep research’ mode, loaded it up with all the files from the project, and pointed it at the knowledge base on my hosting provider’s support pages. I asked it to scour social media and forums for people with similar problems. After a few minutes it had cracked it and gave me the two or three lines of code that unlocked the whole issue. This was another painful but useful lesson.

Building a live reporting dashboard

One thing I wanted from the start was to learn from how people use the chatbot. So I built a live reporting dashboard that tracks what people type in, which questions are most common, and how the bot responds. It’s all anonymous and privacy-friendly, but it’s already shown me which questions come up again and again (“What’s Danny’s experience?” wins by a mile). This helps me keep improving the bot, and tells me what clients are actually interested in.
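Under the hood it’s nothing fancy. Roughly speaking (this is a sketch rather than my actual code, and the salt, file path and field names are illustrative), each question is logged anonymously and then counted:

```python
# Sketch of the anonymous question logging behind the dashboard; not the actual
# code. The salt, file path and field names are illustrative.
import hashlib
import json
import time
from collections import Counter
from pathlib import Path

LOG_FILE = Path("chat_log.jsonl")
SALT = "rotate-me"  # placeholder; salting keeps session IDs non-identifiable

def log_question(session_id: str, question: str) -> None:
    """Append one anonymised record per question asked."""
    record = {
        "ts": int(time.time()),
        "session": hashlib.sha256((SALT + session_id).encode()).hexdigest()[:12],
        "question": question,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def top_questions(n: int = 5) -> list[tuple[str, int]]:
    """Return the most common questions for the dashboard."""
    counts = Counter(
        json.loads(line)["question"].strip().lower()
        for line in LOG_FILE.read_text(encoding="utf-8").splitlines()
    )
    return counts.most_common(n)
```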

Dynamic reporting dashboard

The human side of AI

What I’m proudest of is that this doesn’t feel generic. The bot isn’t just a cookie-cutter Q&A box—it talks in my voice (ish), references my actual work, and gives a real sense of how I think. I even got it to surface quick links to projects and testimonials, so people can see what I’ve done, not just read some marketing fluff.

I’m now much more comfortable with concepts I’d never even heard of ten days ago. RAG, prompt engineering, structured JSON, deploying Node.js apps on Krystal, using Cursor as my development buddy—none of it was easy, but all of it felt achievable once I started.

My aim is to feed it more context so it can keep improving, evolving its tone and style as well as its knowledge base.

The result: something real

The assistant is live on my site, recording real interactions and actually helping people find out about me and my work. It’s not perfect (nothing ever is), but it works. And if you’d told me I’d get this far, this quickly, I wouldn’t have believed you.

Give it a go!

You can try out the chatbot here. I’d genuinely love feedback! What works, what doesn’t, and what would make it even more helpful. If you want help building your own, or just want to swap war stories about AI projects, just get in touch.

If I can do this, with a patchy developer background and a lot of curiosity, so can you.

View AI chatbot