
Process of Discernment

Essays · By Louka · December 26, 2025


The NHAF supports artists whose livelihoods have been disrupted by the recent achievements in artificial intelligence. Naturally, we hold an implied sympathy with their grievances toward AI. Our lack of reverence for generative artificial intelligence was also noted in October’s article, Launch:

An unwritten but obvious rule given the purpose of the NHAF is that pictures created from generative artificial intelligence and automated creative tooling are not applicable and will not be considered great art, even if they can look aesthetic and have a purpose.

I suppose it would be useful to formally set down the organization’s perspective on artificial intelligence, so that there can be no mistake as to what we support and what we do not.

Discerning between artificial intelligences

To commit to an opinion on artificial intelligence, we must first understand what it is. Most people do not have a complete understanding of what AI is, and it shows in how some uncritically reject all forms of artificial intelligence, even those developed before the AI spring responsible for much of today’s upheaval.

Artificial intelligence alone, without adjectives or specialization, is not bad[1]. We have welcomed progress in the field of artificial intelligence because it can be such a helpful tool in resolving long-standing problems that lie beyond the reach of our limited cognition. Medicine was one of the first appreciators of artificial intelligence, back in the 2010s, when AI demonstrated its potential in diagnosing cancers that would otherwise have been missed by oncologists. Likewise, AI has been highly praised for its transformational contributions to protein folding through AlphaFold in 2018.

These applications of artificial intelligence are valuable because they assist human beings. Likewise, any technological invention that assists human beings is worthy of appreciation and legitimization. Some of the largest investors in artificial intelligence call the ongoing worry about AI unfounded, pointing to what they call “historical examples” of technologies that improved the world despite being deemed dangerous at the time for their impact on careers. This is true, but those examples arrived as technologies that were either assistive additions to otherwise human workflows or replacements for existing assistive technologies.

Think, for example, of the replacement of the horse with the car. It is true that, historically, some complained that replacing the horse with the car would lead to the disappearance of the stable hand, a previously valued ancestral discipline. The world lived through this and has been made better since; an assistive technology was improved upon by another assistive technology. In the pursuit of assistance, the assistive career of the stable hand transformed into the career of the mechanic. In truth, the job hasn’t really disappeared; it has simply evolved into something else, because the assistance is still necessary and someone is still needed to maintain what provides it.

Artificial intelligence, in an assistive capacity, is good, and it can still gain this respect and inherit this historical consideration. However, we have seen deployments of artificial intelligence that go beyond the assistive capacity. Many companies have already replaced workers with AI agents, making their human predecessors redundant. There is still a lot of flirting with the idea of making programmers redundant. Of course, this was to be expected, on par with the course many companies had already charted: phasing out the human element everywhere they could, like replacing the cashier with a screen, gradually inserting robots into the warehouse, and so on.

A great gaslighting unfolding right before our eyes is the claim that all applications of artificial intelligence right now are assistive. In the programming world, many are lying to themselves, claiming that the future is the engineer-manager who supervises agents entrusted with writing the code, instead of the traditional case of humans writing it. The more certain future, which many try to ignore, is that the engineer-manager role will itself be replaced with an agent that simply has more permissions, and that this automaton disease will spread upwards until it reaches the higher-level managers or perhaps the executives themselves. There will simply not be a point where the automation of work stops out of some conjured-up technical need; it will stop only at the very role reserved for the purpose of receiving money.

Yes, there is a lot that is wrong. It is understandable how much of a pejorative the term AI has become. It is, however, important that the NHAF not apply a blanket rejection of all artificial intelligence in its communications, at the risk of labeling otherwise good applications of AI as possessing the same nature as the bad ones. To that end, it may sound tempting to treat generative artificial intelligence specifically as the negative, loud application of AI, but that isn’t right either.

Generative artificial intelligence: discerning the good…

Some share the NHAF’s opinion that artificial intelligence can be good but is currently being used for bad, mainly in the form of generative artificial intelligence. Yes, it is true that almost all of the bad uses of AI are generative, but it is intellectually dishonest to equate generative AI with its evil use. Generative AI hasn’t materialized its copyright violations on its own; scraping art from the Internet isn’t part of the transformer formula.

I agree that it sounds disingenuous to make this distinction when most forms of generative AI are bad. I’m not here to tell you that you need to change your wording to specifically account for the few instances of generative AI that are good. I am, however, advocating for discernment when you see someone talk about their use of AI, because there have been an uncomfortable number of instances where someone engineered their own locally-trained model and received the same treatment that is given to commercial generative AI. The person who runs their own model, trained on their own data, is not inflicting any of the damage that commercial AI is doing.

Moreover, another kind of discernment must be used when talking to people who use commercial generative AI. Users of commercial generative AI are not unimpeachable, but when it comes to ethics, they have the unimpeachability of the car driver, turned on its head: the damage they cause lies mostly in their financial support of the harm, not in their direct participation in it. If you equate their use of commercial generative AI with the actual consequences of generative AI (e.g., copyright violations), then you are devaluing your opposition. If you want to criticize the use of commercial generative AI, you need to do it carefully.

The NHAF is all about economically supporting artists who have been made redundant, so let’s discuss this aspect more specifically. It is too easy to look at an AI-generated picture and immediately assume that an artist could have made it, and therefore that the picture’s very existence has made an artist’s work redundant. When someone uses commercial generative AI to create a picture, whether they infringe the implied ethical standard lies solely in the purpose of the picture: are they using it as part of a company’s art pipeline, or are they using it as an avatar, a wallpaper, a piece of furniture[2] art?

Unless you’re one of the few who supported Marques Brownlee’s Panels, you are most likely not paying for your wallpaper. It is equally unlikely that you have commissioned somebody to make one for you. Most people take whatever appears first in Google Images, or on specialized websites that themselves aggregate what can be found online for free. While you can debate whether the process of generating a wallpaper is ridiculous overkill, especially given the energy consumption involved, it is difficult to rationally state that somebody generating an AI-made wallpaper for their own use is committing a terrible ethical infringement. Whatever angle you take, whether the question of copyright, of the replacement of the artist, or of the veracity of the art, it finds its equivalent in the person freeloading a wallpaper or a profile picture from Google Images.

Therefore, for the sake of consistency, you can either raise your ethical standard for the Google Images user, subjecting them to the same criticism that has been applied to those using generative AI for personal use, or you can lower it, recognizing that generative AI users, the means of obtention notwithstanding, are doing the same thing as someone getting a wallpaper from Google Images.

…from the bad.

Users of commercial generative AI, even for personal use, are not unimpeachable, even if the grounds on which you can impeach them are not as broad as previously imagined. I do not believe in the erroneous ideological notion that the use of a product inexorably entails personal support of its authors, or of the philosophies and policies of its decision-makers. However, impersonal support is generated nonetheless as an indirect consequence of using the product, not always as a conscious choice but simply as a by-product.

Generative artificial intelligence is known for consuming enormous amounts of power. Of course, as with all things, there have been vast exaggerations, but the relationship between generative AI and power generation is undeniable in the face of the reopening of nuclear power plants, the trillion-dollar deals, and the shifting of public policy to accommodate data centers. Each person using commercial generative AI is contributing, even if only a tiny amount, to this mess, whether or not they are directly paying for it.

Does that mean, however, that they should carry guilt for this? Certainly not! I have related above the nature of the car driver to the nature of the generative AI user; it can be admitted that using a car carries certain ethical implications that have strong parallels with the use of generative AI. Yet I wouldn’t blame the car driver for using the car, and I wouldn’t blame the generative AI user for using generative AI. Both carry conveniences that, when subtracted, leave a hole in one’s productive life that is hard to fill. At the same time, those who desist from using either in favor of a better avenue, whether that be public transport in the case of the car driver or the commissioning of artists in the case of the generative AI user, should be commended for doing so.

It would be wise, as a society, to treat the use of generative artificial intelligence as something outmoded. Yes, even if it has only been a few years since it went mainstream. I am speaking of the way we educate our children about cars while focusing on teaching them how to use public transport, or the way we speak of recycling as opposed to trashing everything. We should treat generative artificial intelligence as something you reach for when you choose the easy way, the more irresponsible way, though this time the responsible choice is more a contribution to yourself than to society, since refraining from the use of AI mainly benefits you and your cognition.

Skeptical interaction

I would argue that there is a larger symptom accompanying the viral spread of generative artificial intelligence: how we interface with what it can imitate, namely words, sounds, and images. Those informed enough about AI can no longer interact with words, sounds, and images without filtering them through a cognitive process that determines whether a person was skillfully involved in their creation. This has consequences for both the person supportive of AI and the person who rejects it, as both have to perform the filtering nonetheless.

This is a mental load that nobody should carry. It is ridiculous that we have to judge whether something is AI-made before determining how to interact with it. How many of us have liked an image only to discover later that it was made with generative artificial intelligence, robbing us of our heartfelt appreciation? How many artists have been wrongfully accused of using generative artificial intelligence because their expression of the medium resembles the computer-made too closely? Will it become necessary for artists to purposefully insert quirks or alterations into what they make so that they can be cognitively differentiated from the computer-made? And how long until the cat-and-mouse game catches up, with newer and better models becoming capable of inserting these quirks in return, forcing artists to come up with new strategies?

What of those experiencing art, who must now zoom into the fine details of a masterpiece to look for the telltale signs of generative AI? How many fingers have you counted this year, compared to years prior? Have you had to delete saved works that turned out to be computer-made? Do you have to use extensions, search engine queries, or specific websites to find works made by human beings? How long until these, too, are defeated by the progress of generative artificial intelligence? How much longer must art award shows spend analyzing and vetting the backgrounds of individual contestants now?

Guillermo del Toro has said that AI art “demonstrated that it can do semi-compelling screensavers. That’s essentially that”, and that “the value of art is not how much it costs and how little effort it requires, it’s how much would you risk to be in its presence? How much would people pay for those screensavers? Are they gonna make them cry because they lost a son? A mother? Because they misspent their youth?”. This is the despotic consequence, the terrible illness infecting how we interface with art: the sheer inability to integrate into our souls the happiness, the sadness, the anger, and the fear that art creates, because of the possibility that these emotions did not originate in the living.

This is the argument. This is the problem. All other petty issues you can think of, ranging from copyright concerns to career impacts, are irrelevant in the face of the loss of true, great art. Being outraged at someone for using generative artificial intelligence to create a profile picture of an elephant for themselves is nothing compared to how we’ve become unable to properly interact with art. If anything, such fights are pointless and devalue the real struggle: the loss of great art.

This is the reason the NHAF exists at all. The NHAF does not exist to criticize artificial intelligence, for it has already been said that it can be good. The NHAF does not even exist to criticize generative artificial intelligence, because, as a virgin concept, it is fine. The NHAF exists to combat the greatest misuse of generative artificial intelligence, which is its displacement of great art made by human beings, and the most efficient way to combat this misuse is by funding artists who are making great art and should be able to live off of it.

I look on anxiously at the loss of the great artist. I haven’t exactly hidden my immense devotion and reverence for artists; it permeates all that I do and write, it is the origin of the NHAF, and it is a foundation of my civility. This organization exists to safeguard a strong component of a great society, something objectively at risk because of generative artificial intelligence. But we must approach this rationally and not throw out the baby with the bathwater; artificial intelligence can be good, and generative artificial intelligence can only be criticized in a limited way vis-à-vis its usage, and going any further would harm our efforts at preserving the soul of art.

Therefore, this is the position of the NHAF on artificial intelligence:

  1. Artificial intelligence can be a force for good.
  2. Artificial intelligence, however, is being misused, and this misuse has already generated wide-reaching consequences.
  3. Use of artificial intelligence must be curated to retain its assistive capacity, and never developed into a replacement for any human function.
  4. The outputs of artificial intelligence should never be equated in value to the works of human beings.
  5. Great art can never come from generative artificial intelligence.

Addendum: AI has surpassed assistance

I mentioned earlier that some are lying to themselves in saying that artificial intelligence is already assistive, that we have already reached what AI should be used for. This is a lie insofar as the person who says it has already assumed this is how it will remain in the future, with only iterative improvements to the intelligence of the models or the efficiency of the workflow. This automation of the workforce will slowly spread upwards, making the engineer-manager redundant in turn.

However, it must be conceded that generative artificial intelligence has reached the capacity to be assistive; the problem is that it has surpassed it. You can use AI in a manner that is purely assistive and, therefore, not as damaging as its otherwise more usual uses; you can use AI to review what you’ve made, to teach you when you’ve made mistakes, and to act as a tutor. These would have been the nominal uses of large language models, because they all involve cooperating with you through language.

AI, however, has surpassed the assistive capacity and gone on to assume the replacing capacity. The outmoding capacity. This is the problem, and the root of everything. If you are using AI in the assistive capacity, then you are using it for what it is meant, and therefore you can hold yourself mildly more moral than the rest, even if still impeachable. This must be discerned, because I foresee some critics chiming in to say that “AI is already assistive, so what you’re saying is outdated!”, which is nonsense.

Footnotes

  1. Notwithstanding the hypothesized existential risks first posited by twentieth-century philosophers.

  2. Furniture as in the furniture music of Erik Satie.
