Artificial intelligence is currently regarded as a limitless source for queries, ideation and content creation. This year has proved to be the year that AI truly took off with mainstream audiences.
ChatGPT has given any interested onlooker the opportunity to get to grips with the true potential of this resource, sparking both excitement and fear across all industries.
Google has announced plans to update their search engine with generative AI, allowing users to delve deeper into their research than ever before. It is only a matter of time before more and more companies actualise the potential of this technology.
Will it make jobs redundant? Can it really write degree-level essays that won’t be detected by plagiarism scanners? How far can this technology go, and, more importantly, how far should it go?
Fears of the unknown
Resistance to the mainstream progression of AI was made clear earlier this year when an open letter, signed by thousands, called on all AI labs to immediately pause training AI systems more powerful than GPT-4 for at least six months.
These fears stem from signatories, including Elon Musk, who lack confidence that the effects of AI systems will be positive and that their risks will be manageable.
Plans for regulating AI are already in full swing across the globe. These plans are coming directly from governments that acknowledge AI's potential for positive impact while remaining alert to its risks.
The European Commission has developed a risk-based approach to AI, a framework that categorises AI systems on a scale from minimal risk to unacceptable risk.
Meanwhile, the UK government has published proposals for a regulatory framework for AI. The goal of this paper is to “provide a clear, pro-innovation regulatory environment”, making the UK a top place for building foundational AI companies. The framework outlines a common-sense approach aimed at fostering innovation without compromising safety or privacy.
While AI continues to develop at a fast pace, governments and regulators move slowly. Musk and other supporters of regulation believe that if the debate starts now, lawmakers won't be too far behind the technology when regulation is officially passed.
Regulation is inevitable for software with the potential to impact livelihoods. And when data enters the mix, an area under constant security threat, AI regulation could prevent the misuse of sensitive information.
Opponents of AI regulation argue that it would be impossible to regulate every aspect of AI that impacts human life.
They also fear that regulation at this stage could stifle the growth of the technology, preventing it from reaching its full potential. AI technology experts such as Alex Loizou, co-founder of Trouva, actively oppose any form of AI regulation before the technology can be fully understood.
Regulating AI is essential given the rate of growth in this area and its reliance on data to inform its outputs. What government institutions have made clear in their proposals is that they want to facilitate the growth of AI while ensuring the safety of individuals.
Exceptional digital experiences built on insight and strategy
Utilising data-driven decision making, the teams at Propeller are empowered to build smart strategies that engage your customers and build relationships that last. It’s about more than a beautiful website – as a strong growth marketing agency, we ensure every move, every click, every feature is defined by a clear goal.
With a unique expertise in developing digital platforms and growth marketing, our team works on integrating the best technology stack tailored to our clients.
Our teams’ expertise in growth marketing strategy is why we’ve consistently worked with a number of FTSE-listed businesses for over a decade, developing and refining those brands’ digital growth as the landscape has changed.