
The Frontier Model Forum: What Is It and How Will It Help Regulate AI?

The Frontier Model Forum (FMF) is a newly announced partnership aimed at promoting the responsible and safe development of AI models. In this blog we delve into what it means for the AI industry.

Ingrained into our everyday lives through technologies such as facial recognition, digital assistants, and smart cars, the era of AI is well and truly upon us, and there are no signs of its substantial growth stagnating. In fact, the AI market is projected to reach $407 billion by 2027, with an estimated annual growth rate of 37.3% between 2023 and 2030 [1].

Alongside this, businesses are recognising the potential of AI and are increasingly leveraging it to streamline their operations, enhance data-driven decision-making through data analysis, automate repetitive tasks and improve customer service. To provide some context, in the UK alone almost half a million businesses had adopted at least one AI technology in their operations at the start of 2022 [2]. 

And yet, whilst the AI industry has continued to advance and adoption has increased, there has been little progress in mitigating AI-associated risks, despite growing concerns within organisations about the cyber-security and regulatory compliance of artificial intelligence [3]. 

Now, don’t get us wrong, we’re not convinced we’re going to have an I, Robot situation on our hands any time soon. However, it cannot be denied that there are potential risks associated with the use of AI technology, and an urgent need for regulation to address these concerns.

This is where the Frontier Model Forum comes into play…


Introducing The Frontier Model Forum 

The Frontier Model Forum (FMF) is a newly announced partnership aimed at promoting the responsible and safe development of AI models. 

Formed by Microsoft, Google, OpenAI and Anthropic, this new industry body has set out to cover four core objectives: 

  1. Advancing AI safety research
  2. Identifying best practices
  3. Collaborating with policymakers, academics, civil society and companies
  4. Supporting efforts to develop applications that can help meet society’s greatest challenges 

Whilst these four tech giants founded the FMF, they aim to establish an Advisory Board and invite member organisations to contribute towards its strategy and priorities. Organisations that wish to join the forum will need to meet the following membership criteria: 

  • Develop and deploy frontier models (large-scale machine-learning models capable of performing a wide range of tasks beyond what is currently possible with even the most advanced existing models)
  • Demonstrate a strong commitment to frontier model safety 
  • Be prepared to contribute towards advancing the FMF’s efforts 

The aim of the Frontier Model Forum is to leverage the collective technical and operational knowledge of its member companies to benefit the overall AI ecosystem. This includes driving progress in technical evaluations and benchmarks, as well as creating a public repository of solutions to promote industry best practices and standards. Through these collaborative efforts, the Forum seeks to contribute to the advancement of the AI industry as a whole. 

“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.” Brad Smith, Vice Chair & President, Microsoft.


Our thoughts

From our perspective, AI presents a range of risks – job displacement, security and privacy concerns, and bias and discrimination, to name a few. However, we believe the primary concern related to AI revolves around the absence of regulation and the lack of clear guidelines. This is why we consider the launch of the Frontier Model Forum a highly encouraging and much-needed development, one that will help to mitigate risks, establish industry-recognised standards and reduce potential negative social impact. 

By bringing together experts and industry leaders, it will foster a collective effort to:

  • Reduce potential negative impact
  • Safeguard society’s interest
  • Ensure the responsible and ethical use of AI 

The Frontier Model Forum has the potential to shape the future of AI in a way that minimises risks, enhances transparency, and creates a more secure and accountable environment for AI development and deployment. In doing so, it allows us to continue reaping the benefits made possible by AI and to unlock further progress in the field, all whilst effectively managing the associated risks. 


