How AI is poised to unlock innovations at unprecedented pace

Artificial intelligence (AI) has rapidly evolved from future promise to present reality. Generative AI has emerged as a powerful technology applied across countless contexts and use cases — each carrying its own potential risks and involving a diverse set of stakeholders. As enterprise adoption of AI accelerates, we find ourselves at a crucial juncture. Proactive policies and smart governance are needed to ensure AI develops as a trustworthy, equitable force. Now is the time to shape a policy framework that unlocks AI’s fullest beneficial potential while mitigating risks.

The EU and the pace of AI innovation

The European Union has been a leader in AI policy for years. In April 2021, it presented its AI package, which included its proposal for a Regulatory Framework on AI. 

These initial steps ignited AI policy conversations amid accelerating innovation and technological change. Just as personal computing democratized internet access and made coding more accessible, fueling further technology creation, AI is the latest catalyst poised to unlock future innovations at an unprecedented pace. But with such powerful capabilities comes great responsibility: We must prioritize policies that allow us to harness AI’s power while protecting against harm. To do so effectively, we must acknowledge and address the differences between enterprise and consumer AI.

Enterprise versus consumer AI

Salesforce has been actively researching and developing AI since 2014; we introduced our first AI functionalities into our products in 2016 and established our Office of Ethical and Humane Use of Technology in 2018. Trust is our top value. That’s why our AI offerings are founded on trust, security and ethics. Like many technologies, AI has more than one use. Many people are already familiar with large language models (LLMs) via consumer-facing apps like ChatGPT. Salesforce is leading the development of AI tools for businesses, and our approach differentiates between consumer-grade LLMs and what we classify as enterprise AI.

Enterprise AI is designed and trained specifically for business settings, while consumer AI is open-ended and available for use by anyone. Salesforce is not in the consumer AI space — we create and deploy enterprise customer relationship management (CRM) AI. This means our AI is specialized to help our customers meet their unique business needs. We’ve done this with Gucci through the use of Einstein for Service. By working with Gucci’s global client service center, we helped create a framework that is standardized, flexible and aligned with the brand’s voice, empowering client advisers to personalize their customers’ unique experiences.

Aside from their target audiences, consumer and enterprise AI differ in a few other key areas:

  • Context — enterprise AI applications often have limited potential inputs and outputs because their models are designed for specific business tasks. Consumer AI usually performs general tasks that can vary greatly depending on the use, making it more prone to misuse and harmful effects, such as exacerbating discriminatory outcomes through unvetted data sources or the use of copyrighted materials.
  • Data — enterprise AI systems rely on curated data, generally obtained with the consent of enterprise customers and deployed in more controlled environments, which limits the risk of hallucinations and increases accuracy. Consumer AI, by contrast, can draw its data from a broad range of unverified sources.
  • Data privacy, security and accuracy — enterprise customers often have their own regulatory requirements and can request that service providers ensure robust privacy, security and accountability controls to prevent bias, toxicity and hallucinations. Enterprise AI companies are incentivized to offer additional safeguards, as their reputation and competitive advantage rely on it. Consumer AI applications are not beholden to such stringent requirements.
  • Contractual obligations — the relationship between an enterprise AI provider and its customers is founded on contracts or procurement rules, clarifying the rights and obligations of each party and how data is handled. Enterprise AI offerings undergo regular review cycles to ensure continuous alignment with customers’ high standards and responsiveness to evolving risk landscapes. In contrast, consumer AI companies provide take-it-or-leave-it terms of service that inform users what data will be collected and how it may be used, with no ability for consumers to negotiate tailored protections.

Policy frameworks for ethical innovation

Salesforce serves organizations of all sizes, across jurisdictions and sectors. We are uniquely positioned to observe global trends in AI technology and to identify developing areas of risk and opportunity.

Humans and technology work best together. Transparency is critical to facilitating human oversight of AI: Humans should remain in control and understand the proper uses and limitations of an AI system.

Another key element of AI governance frameworks is context. AI models used in high-risk contexts could profoundly affect an individual’s rights and freedoms, including their economic and physical well-being, dignity, right to privacy and right to be free from discrimination. These ‘high-risk’ use cases should be a priority for policymakers.

The EU AI Act does just that: It addresses the risks of AI and seeks to safeguard people and businesses. It creates a regulatory framework that defines four levels of risk for AI systems — minimal, limited, high and unacceptable — and allocates obligations accordingly.

Comprehensive data protection laws and sound data governance practices are foundational for responsible AI. For example, the EU’s General Data Protection Regulation (GDPR) shaped global data privacy regulation, using a risk-based approach similar to that of the EU AI Act. It contains principles that also bear on AI regulation: accountability, fairness, data security and transparency. GDPR sets the standard for data protection laws and will be a determining factor in how personal data is managed in AI systems.

Partnering for the future

Navigating the enterprise AI landscape is a multistakeholder endeavor that we cannot tackle alone. Fortunately, governments such as the United States, the United Kingdom and Japan, along with multilateral organizations including the U.N., the EU, the G7 and the OECD, have initiated efforts to collaboratively shape regulatory structures that promote both innovation and safety. By forging the right cross-sector partnerships and aligning behind principled governance frameworks, we can unleash AI’s full transformative potential while prioritizing humans and ethics.

Learn more about Salesforce’s Enterprise AI policy recommendations.

