This is NOT the question. In fact, there is no question: you are already using it. ChatGPT came out in November 2022, was adopted and marketed, and grew quickly. Then came Bard and Bing. Each has its uses and nuances. The ability to detect a Bard-, Bing-, or ChatGPT-generated item has also quickly grown in academia, the law, Google's search engine, and elsewhere. There is a myriad of discussion about the end of humanity and/or a Terminator-style takeover. We also see discussion of relief for employers who can't find employees to sit behind a desk doing repetitive, mundane work, and claims that writers and designers are soon out of business. Why pay them when you can just ask ChatGPT to write the copy and DALL·E to make the image? (By the way, no ChatGPT, Bard, or Bing was used for this blog.)
My son-in-law is a UX specialist for a large publicly traded company. He started out in graphic design and has been in the field for many years. I asked him if he was concerned about a bot taking over his job. He was surprised by how fast these AI bots adapted and grew, but upon reflection he recognized that his job was creativity and human understanding. He said he simply needs to be better than a bot at both.
Earlier this week we had a discussion with one of our firm's clients. They want us to go in, find the errors created by the AI that is already built into the GL apps, and correct those errors. We also stopped a test of a new AI product that worked so poorly it was impossible to turn off. I don't really blame the AI bot, as the original coders did not understand two things: 1) the principles of basic bookkeeping, for example differentiating between an asset and a liability, and 2) the nuances of how a particular client chose how, when, why, and where they spent and moved money. The ensuing gasps were heard twelve time zones away. The programmers responded quickly, adapted fast, and are already testing the fix so we can then test its efficacy.
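To make the bookkeeping point concrete, here is a minimal, purely hypothetical sketch of the kind of sanity check a human reviewer might run over bot-classified ledger entries. The account names, the sign convention (debit balances positive, credit balances negative), and the function itself are my illustration, not anything from the actual GL apps discussed above.

```python
# Hypothetical sketch: flag ledger entries whose bot-assigned account
# type conflicts with basic bookkeeping. Assumes a simple convention
# where debit balances are positive and credit balances are negative.

# Normal balance side for each account type (basic bookkeeping rule):
NORMAL_BALANCE = {"asset": "debit", "liability": "credit"}

def flag_misclassified(entries):
    """Return account names whose bot-assigned type conflicts with
    the side the balance actually sits on."""
    flagged = []
    for e in entries:
        side = "debit" if e["balance"] >= 0 else "credit"
        if NORMAL_BALANCE.get(e["bot_type"]) != side:
            flagged.append(e["account"])
    return flagged

entries = [
    {"account": "Cash", "balance": 1200.0, "bot_type": "asset"},
    # A bot that can't tell an asset from a liability might label a
    # loan payable (a credit balance) as an asset:
    {"account": "Loan Payable", "balance": -5000.0, "bot_type": "asset"},
]
print(flag_misclassified(entries))  # → ['Loan Payable']
```

The point of the sketch is not the code; it is that the rule itself (asset vs. liability) has to come from someone who understands bookkeeping. A bot trained without that rule will happily "complete its task" and pass its own test.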
AI is ubiquitous across societies. At some level each of us is using a bot to deduce, present a prediction, and let us quickly plan a move. The newness is in the generative promise that marketing and hype, with their constant pushing in social media for clicks, present to both terrify and depress. Because if it bleeds (and makes you bleed), it leads. I have no doubt that these are early days and the possibilities are vast. I read I, Robot; loved the death of the M-5 and the affirmation of "Captain Dunsel" on Star Trek; and had my own small Robby the Robot as a kid. Stories of robots helping and robots taking over are in equal abundance.
Helping is the best part of the current AI in the accounting industry. The predictable moves it can auto-generate in our work apps really do help. The 80/20 of the predictions within the GL at least cuts down on EBOK (error between operator and keyboard). We must watch for the 20% that is EBBHC (error between bot and human choices).
I can define how I want work to flow, how I want emails written, and when they get sent. I can use AI to help define the steps of the work. That does require that I set some hard-and-fast rules with my team on how we want this done; then we can attach the bot to do its job. If, however, I have a rogue who decides to do things their own way, bypass the rules, and never tell anyone, well, the bots will get the blame, will they not? In this regard I need all hands on deck in agreement, and when something doesn't work, shout it out so that together we find a better way for the bots to behave.

Now, that is in the actual work apps that aid us in our daily tasks. But what about the AI in the apps that manage the ever-updating transactions, the basis for all things accounting? Yes, I can use some of those predictive qualities to generate correct results. But the "rogue" client that does things willy-nilly, does not communicate, loses connectivity, changes often… well, that 20% I can't control. It needs review by someone who understands the efficacy of the situation. Like the coders who gave us that AI to evaluate, who didn't understand asset vs. liability: they needed to be redirected because they did not see the issue. The bot completed the test. The client wasn't wrong, as they got what they asked for; the bot did not see a wrong because it completed its task. We, however, can see the outcome of not correcting the course. Alpha Failure takes a human to discover. At least for now.