Journey to Responsible AI
Let's use AI responsibly 👾
What is Responsible AI? 👾
Many AI technologies of late are racing to release and implement the latest and greatest. Many startups are doing similar things while searching for their niche, and many corporations are looking to hop on the AI train.
But with great power and technology comes great responsibility. AI is a GENERAL term covering multifaceted use cases. It can be a tool to help detect cancer and tumors. It can write essays and answer homework questions. Depending on how it's used, there are many benefits - and that's what we are all aiming for, right?
Undoubtedly, there are and will be bad actors in this space who use AI irresponsibly. That's where companies like Credo.ai come in. Credo.ai is attempting to take on this new commercial mainstream wave of AI (generative AI being the hottest right now) by implementing an "intelligence layer for AI projects across your organization. Track, assess, report, and manage AI systems you build, buy, or use to ensure they are effective, compliant, and safe." (https://www.credo.ai/). Within three short years of existence, they have already racked up many achievements, such as partnering with Databricks and presenting at the White House - you can find more here: https://www.credo.ai/blog/2023-pioneering-a-new-era-in-ai-governance.
So what exactly are they doing with responsible AI?
The way they are selling their business seems to be a SaaS sort of model - perhaps I can coin it here first if it hasn't been said already - Responsible AI as a Service (RAaaS?). They split their services into six: AI Guardrails, Vendor Risk Assessment, AI Adoption Tracking, Regulatory Compliance, Scalable AI Governance, and Audit Artifacts. The bottom line is that these services aim to minimize risk, however the user defines it. IMO, it seems less like responsible AI and more like having better control over your inputs and outputs. There are no defined laws or global set of guidance on what exactly responsible AI is, so companies are racing to become self-sufficient so that when the time comes, they can defend themselves with their own self-defined version of responsible AI.
How do different companies define responsible AI?
Accenture says, "Responsible AI is the practice of designing, developing, and deploying AI with good intention to empower employees and businesses, and fairly impact customers and society—allowing companies to engender trust and scale AI with confidence." Link
Microsoft says, "The Responsible AI Standard is the set of company-wide rules that help to ensure we are developing and deploying AI technologies in a manner that is consistent with our AI principles." Link
Amazon doesn't have a one-liner. They say, "At AWS we are committed to developing AI responsibly, taking a people-centric approach that prioritizes education, science, and our customers, to integrate responsible AI across the end-to-end AI lifecycle." Link
And Anthropic's whole company is built around responsible AI, bound tightly together with their set rules of ethics and responsibility. You can read more here: https://www.anthropic.com/news/anthropics-responsible-scaling-policy.
Instead of a "use it at your own peril" type of situation, these companies are giving you proper guidance to help mitigate risk - of course, whether the user actually follows these guidelines is a different story.
Join me on my journey towards discovering Responsible AI. Please feel free to reach out if you have anything you'd like to discuss around AI or responsible AI. It's an exciting time, folks - let's experience this wave together :)