AI models have come a long way since they stepped prominently into the public eye a couple of years ago. For a technology that is actually more than half a century old (not joking), one could argue that access to IoT systems, along with vast amounts of user data, is what has made the recent wave of advancement feel almost relentless.

It really makes you reflect on the nature of technology itself, and on what happens when the right conditions align for a tool to suddenly become part of our daily lives. However, as some experts like Knut Ramstad (CTO at Nordic Office of Architecture) have noted, we needed to move past the initial hype before we could start seeing the real, productive value in some of AI's more advanced features. He made these remarks during the BIM Summit organized by Symetri in Oslo (April 2025).

Interestingly enough, I actually owe it to one of my students that I started exploring a more practical application of the tech in the realm of architectural illustration. Like most architects, I enjoy participating in concept competitions. More often than not, that means long hours, no payment, and just the chance to make something interesting when inspiration strikes.

For many solo practitioners, that also leaves more than just a gap behind: there are always aspects of the project we wish had turned out better.

So, driven by curiosity, I decided to explore a hands-on use of AI. The example below is a concept I developed for a space station. At the time, I used GIMP to add a few silhouettes of figures and some plant life to a proposed hydroponic area, just as I have done since the good old days. Not a bad job, but I would certainly have loved some astronauts there instead of ordinary people. The plants were fine, I suppose.

Let's see how AI might have handled the same task (ChatGPT, GPT-4). First, the original deliverable:

Figure: The final illustration submitted to the competition.

Now, what happens if I take the original render (before post-processing) and use the following prompt?

Prompt: “This is a render for a conceptual space station that I developed a while ago. I was wondering if you could add some figures of astronauts in order in the illustration. Do not change my original picture”

Figure: Final render before post-processing.

Figure: The first suggestion by ChatGPT after the prompt.

Already a big improvement, and it took far less time than my manual edit. I am, however, missing the hydroponic area. Let's follow up:

Prompt: “This looks fantastic GP, in the center of this block there are hydroponics. Can you add try and add them?”

This led to the next picture, which might actually have worked rather well as a submission:

Figure: Illustration including the hydroponic elements.

Lastly, I wondered how some lights might fit in:

Prompt: “All sorts of amazing. How about some recessed lights on the roof? And it would be fun to see at least one dog astronaut. That reminds you of Cosmo from Guardians of the Galaxy (to give an example)”

Figure: Final illustration of the exercise.

As you can see, though, the last one already shows a noticeable drop in quality, and I certainly didn't ask for that "yellow-ish" cast. But to be fair, that's nitpicking, especially considering I was able to produce far more in significantly less time.

You can't help but marvel at the possibilities this brings (along with the fair number of illustration jobs it puts at risk), but let's save that discussion for another time.

Share what you think!
