issue dated April 26, 2024 Bangalore, India

The definitive, illustrated & annotated anthology of my varied pastimes


AI Policy

Updated on February 3, 2024

Opinions and beliefs are complex. If you only want a summary of my feelings about the use of AI in my creative practice, it's right below. But I think the topic warrants a lot more nuance, hence the roughly 800 words that follow it.


TL;DR

This year, I’ve found it necessary to develop my own set of guidelines to dictate how I engage with these technologies. I’m writing this not only as a disclaimer for those who consume my content—a sort of “nutritional label,” if you will—but also as a self-imposed framework to hold myself accountable in making a good-faith effort to resolve the tension between potential and pitfalls.

I will:

  1. Try things out. I want to experiment, I want to learn, I want to explore. Whether it is Midjourney, DALL-E, or ChatGPT, I want to keep in touch with what is happening and find meaningful applications for these tools in my work.
  2. Treat tools as tools. I see this tech as extensions of my creativity, not replacements for it. When you engage with my work, know that what you’re experiencing is not just a random output from an algorithm—it’s a product of my deliberate intent.
  3. Limit reliance on generated content. I commit to ensuring that generated content—be it text, code, video, or audio—will not constitute more than 30% of any piece I create. My work will remain predominantly an expression of my own creativity. Naturally, this is impossible to quantify, so I intend for this guideline to serve as a philosophical touchstone rather than a rigid rule.
  4. Fully disclose when AI or any other form of generated content is used, and clearly indicate this in the work or accompanying documentation.
    1. For text content, I will indicate which portions of my work, if largely unmodified by me, are AI-generated.
    2. For code snippets, unless the generated code is a common solution to a well-known problem (e.g., sorting an array), I will include comments to indicate that the logic or solution was generated with an AI tool.

I will not:

  1. Use any AI tool to plagiarize work that mimics the style or substance of other artists without proper attribution and consent.
  2. Use unlicensed or generated artwork. I used generated artwork in ‘Dance of Dependence’ because DALL-E was new to me; I have learnt better since then. Going forward, all artwork will either be commissioned from 100% real human artists or sourced from public domain collections.
  3. Be ignorant: I will not turn a blind eye to the ethical implications of the technologies I use and continuously update my ethical guidelines as the landscape evolves.

Disclosure on ChatGPT usage

Currently, I maintain a subscription to ChatGPT Plus. I primarily use it for debugging and troubleshooting code. When I encounter issues or roadblocks, it sometimes suggests alternative approaches that I might not have considered.

Beyond its technical utility, ChatGPT Plus also acts as a creative sounding board for me. When I’m conceptualizing new projects or features, I use it to brainstorm and validate ideas. It serves as an initial platform where I can freely “bounce off” thoughts, helping me shape those preliminary ideas into more fully realized concepts that are my own, and from that point on do not rely on a generative model.


As of late September 2023, when I write this, the technological landscape has undergone shifts that I could only dream of a year ago. Back then, OpenAI’s GPT-3 was the pinnacle of the language models accessible to me, and DALL-E was still out of reach since I was waitlisted. Everything looked so promising, and it had taken a long time (even within my admittedly short frame of reference) for things to reach this point.

I was in 8th grade when I first read about Google’s DeepDream — who wouldn’t be crazed about the fact that computers were trying to “see” things? By 2018, almost three years later, I had finally scraped together the minimum brain cells required to experiment with it on Google Colab. And when models like BERT came along, I looked on from the sidelines as people shared interesting things that they were doing with them. Of course, I couldn’t understand most of the technical stuff, but it was incredibly fun to see what people were making. Things always seemed like they were going somewhere, but a keyboard-equipped monkey like me couldn’t see where. It was just an exciting time to be alive in, even if all I was doing was dipping my toes into the water.

So naturally, the release of GPT-3 was a BIG moment for me. For the first time, these complex algorithms weren’t just theoretical constructs or tools that barely functioned after 4 hours of set-up; they were powerful and accessible to anyone who wanted them, all through a simple text field. Needless to say, I lost my mind. “This Studio Looks at Bellybuttons” was what I blew my $20 of OpenAI credits on. Looking back, that era now seems almost quaint. The internet didn’t yet feel polluted. Now, every kind of media conceivable has an AI-suffixed counterpart, and more is added every day. One would think that after years of waiting for this technology to evolve and reach this point of accessibility and usability, I would be happy that I have access to it so easily. I am not. My overwhelming sense of jadedness, shared by many, stems from a few key reasons.

As an artist, I have realized that agency over my work is non-negotiable. Whether I’m fine-tuning a photograph in Lightroom, modelling a landscape in Blender, or writing these words, what I want is a direct channel between my thoughts and what you perceive my work to be. I want intention. I want to be able to step back, tilt my head, and evaluate my work — not as a task to be completed, but as something I made and continue to endlessly refine. Creative works aren’t chores to me; they’re processes I initiate and follow through. I do not want to yield that control to a blender which throws up the most probable output to a prompt. I don’t want to prompt; I want to do.

As an artist, I also take issue with how this tech has been manufactured. It seems wild to me that with enough VC funding, I can plunder whatever the hell I want from the internet, co-opt the labour and intellectual property of others without consent, and create something that cannibalizes the practice and hard work of those very same people. And this is just one ethically murky facet; there are more, such as bias in AI models, or the environmental impact of training large models.

Lastly, I detest the techbrofication of creative spaces and the sickening brand of discourse which is common with AI evangelists. Legitimate criticisms are frequently dismissed with clichés like ‘NGMI,’ ‘evolve or die,’ or ‘cope and seethe’ — and sometimes, amusingly, ‘Art is finally accessible and without barriers now.’ Art always was. When I hear this, it becomes clear that such folks don’t want to create art; they want to create content.

What is happening is not just a battle for market control. A small number of tech titans are busy designing our collective future, presenting their societal vision, and specific beliefs about our humanity, as the only possible path. Hiding behind an illusion of natural market forces, they are harnessing their wealth and influence to shape not just the productization and implementation of AI technology, but also the research. (https://time.com/6302761/ai-risks-autonomy/)

The dismissive approach to artists’ hard work and creative efforts is alarming. The worldview that this is the future of all art and creativity is stupid. This type of arrogance is reminiscent of attitudes seen during the ‘Web3’ and NFT boom, often from similar circles. Having witnessed this pattern before, I find it neither engaging nor respectable.

But my views are not black or white. My journey from an eager 8th grader experimenting with Google’s DeepDream to a discerning artist in a rapidly evolving technological landscape underscores the duality of my relationship with AI.

I continue to be driven by a desire to explore and innovate. Recognizing the flaws in something doesn’t mean you can’t engage with it; it means you engage with it critically. I think it’s entirely feasible to hold these opinions and still be excited about the technology. In all my life so far, I have wanted nothing more than to explore, experiment, and tinker with whatever came my way, and I don’t want to deny myself that pleasure. These opinions don’t indicate a wholesale rejection of AI or tech innovations; rather, they’re simply a call (and a reminder to myself) for ethical and thoughtful implementation. It’s not a zero-sum game; these two belief systems can co-exist.