The Guide to Implementing AI at Your IDD Agency
Kibu Team

Artificial intelligence is showing up everywhere. It writes emails, answers questions, and suggests ideas. But if you run an IDD agency, you might be wondering what it can really do for your team. You might also be wondering where to start without making a mess or putting the people you support at risk.
This guide is for managers and executives at provider agencies who want a clear path forward. We will keep it simple. No buzzwords. No hype. Just what to do, what to avoid, and what to think about before you begin.
Why This Matters Right Now
The IDD field is stretched thin. According to ANCOR's 2025 workforce report, most providers still report moderate or severe staffing shortages. Turnover among Direct Support Professionals hovers around 40 percent each year, and every DSP you have to replace costs your agency thousands of dollars. On top of that, documentation pulls hours away from time with the people you support.
AI will not solve the workforce crisis on its own. But when used well, it can give your team back time. It can help new staff write better notes. It can make person-centered planning feel more personal. And it can ease the administrative weight that leaders carry every day.

Step 1: Pick One Problem to Solve First
The biggest mistake we see is agencies trying to roll out AI across everything, all at once. Pick one area where your team is struggling and where the risk is low. For most IDD providers, a smart place to start is content and curriculum work. Think activity planning for day programs, fresh class ideas, onboarding and training materials for new DSPs, or family-facing newsletters. None of this involves Protected Health Information, so your team can experiment, build confidence, and show early wins without putting member data at risk.
Ask your managers and program staff a simple question: what kind of planning or content work eats up the most time? Their answers will point you to the right first project. Pick that single spot, and use it to prove value. Once your team has built some AI muscle and you have a HIPAA-safe platform in place, you can expand into higher-value areas like progress notes, ISP drafting, and person-centered planning.

Step 2: Choose a Tool Built for Your World
Not every AI tool is safe for healthcare and human services. If you put member information into a general chatbot, you could be putting your agency at risk. AI tools that touch Protected Health Information are covered by HIPAA. That means the entire pipeline, from training data to model outputs, needs to meet HIPAA standards.
For more info on this, check out our previous blog covering AI and HIPAA requirements!
Before you sign anything, look for these three things:
- A signed Business Associate Agreement, or BAA. This contract holds the vendor to the same privacy rules you follow.
- Clear data practices. Ask where data is stored, who can see it, and whether it is used to train outside models. If a vendor cannot answer plainly, keep looking.
- Features built for disability services. General AI tools do not know what an ISP is. A platform built for IDD providers will understand waivers, service notes, outcome tracking, and the language of person-centered care.

Step 3: Run a Small Pilot Before Going Organization-Wide
Pick a small, willing team. Maybe one day program, or maybe one house. Have them use the tool for about 30 to 60 days. During that pilot, track three things: time saved, quality of output, and how staff feel about it. That last one might matter most for long-term success. If your DSPs do not trust the tool, they will not use it, no matter how much time you think it could save.
Build a short feedback loop. For this, weekly check-ins work well. Ask what is working, what isn't, and what might be confusing. Make sure you can review a sample of AI-assisted output to confirm quality. This is your chance to catch mistakes before they get set in stone.

Step 4: Train Your Team on What AI Is, and What It Is Not
Your staff will have questions. Some will be excited. Some will be worried about their jobs. A short, honest training goes a long way. Cover three things:
- What the tool can do. Show real examples.
- What the tool cannot do. It cannot replace a DSP's relationship with a member, and it cannot make clinical judgments.
- How to use it safely. Always review the output. Never paste in information the tool was not approved to handle.

What to Do
Use AI to draft, not to decide. Let it write the first version of a progress note based on a DSP's voice recording or quick bullet points. Then have the DSP review, edit, and sign off. The human stays in charge.
Lean on AI for person-centered planning support. ISPs can get long. AI can help your team pull themes from years of notes, flag goals that have stalled, and write cleaner plan language that reflects the individual's voice. Good AI tools can summarize patterns across sessions so your planners can focus on the person instead of the paperwork.
Build a simple review process. Every AI output should have a human check before it becomes part of an official record. This is your quality safety net.
Measure the impact. Track documentation time before and after. Track audit findings. Track staff satisfaction. Numbers will help you decide where to expand next.
Get buy-in from the top down and the bottom up. Your Executive Director should be visible in the rollout. Your DSPs should feel heard. When both groups are rowing in the same direction, adoption sticks.

What Not to Do
Do not let AI make decisions about the people you support. Staffing matches, behavior plans, and care decisions need human judgment. AI can suggest. Humans decide.
Do not use AI to replace DSP relationships. The magic of this field is the connection between a DSP and the person they support. No algorithm can do that work. AI should free your DSPs to spend more time with the people they support.
Do not feed AI tools information they are not allowed to see. Generic AI tools without a BAA should never handle member names, diagnoses, behaviors, or any other Protected Health Information. This is a compliance risk that can cost your agency real money and real trust.
Do not trust every output. AI tools make mistakes, and they can reflect bias from the data they were trained on. Review everything. Always.
Do not skip your staff in the rollout. If your team feels surprised or surveilled, you will lose trust fast. Bring them in early. Explain what is changing. Listen to their concerns.

Special Considerations for IDD Providers
Person-centered means person-centered, even with AI. The people you support should never feel like data points. Be thoughtful about language. When AI summarizes a member's goals or history, those summaries should sound like a real person wrote them about a real person.
Think about consent and dignity. Families and members deserve to know how their information is used. Many providers are starting to add AI disclosures to intake paperwork. Transparency builds trust.
Watch for bias in outputs. AI can miss nuance, especially for people with complex support needs. If a tool keeps suggesting something that does not fit your population, raise the issue with your vendor. Good vendors want this feedback.
Plan for state rules on top of federal rules. Several new state AI laws took effect in 2026, and more are coming. Your compliance team should stay current on what applies in the states where you operate.
Prepare for audits. AI-generated documentation should look clean, accurate, and person-specific. If every note sounds the same, auditors will notice. Train your team to personalize AI drafts so that each note reflects the individual it is about.

Where Kibu Fits
AI hits a real limit. It can brainstorm class topics and draft discussion prompts, but it cannot teach a fitness class, lead a job readiness workshop, or be the coach your members look forward to seeing each morning. That kind of content has to come from real people.
That is where Kibu starts. Kibu gives IDD providers a library of nearly 600 pre-recorded video lessons covering life skills, job development, fitness, financial literacy, and more, with new classes added every week. Your members also get daily livestream classes led by real coaches they can see, hear, and connect with. It is ready-made, human-created programming your team can plug into tomorrow, without asking a DSP to build curriculum from scratch.
Then Kibu layers AI on top of that foundation. Service note drafting, life plan tracking, attendance capture, and intelligent reporting are all supported by AI tools tuned for disability services. The Business Associate Agreement is already in place. The AI features are built for ISPs, progress notes, and person-centered planning. Your team gets a turnkey solution, so you do not have to stitch together a compliance program, a content library, and an AI stack on your own just to experiment.
If you are ready to see how real content and smart AI can fit into your agency without the implementation headache, schedule a quick demo and we will walk you through what is possible.

The Bottom Line
AI is not a silver bullet. It will not fix wages, funding, or the workforce crisis. What it can do is give your team a little bit of their day back. It can lighten the documentation load. It can make person-centered planning feel more personal. It can help new DSPs feel supported while they learn.
Start small and pick one problem. Use a tool built for this work. Train your team. Review the outputs. Then grow from there.
The agencies that thrive in the next few years will be the ones that use smart tools to free their people to do what they do best, and Kibu is there to help you along the way!

Frequently Asked Questions
Will AI replace our DSPs or other frontline staff?
Never! AI helps your staff work faster and smarter. The human relationships at the heart of IDD services still depend on real people. The goal of AI is to take paperwork and planning work off your DSPs' plates so they have more time for direct support. When we hear agencies worrying about replacement, the fix is usually clear communication. Tell your team from day one that AI is there to help them, and back that promise up in how you roll it out.
Is it safe to use AI with member information?
It depends. AI tools that handle Protected Health Information need to be covered by a signed Business Associate Agreement. They also need proper data security, access controls, and clear policies on what the vendor does with your data. Free, general chatbots do not meet these requirements, and putting member information into them is a HIPAA violation. Always check the BAA and data practices before your team uses AI on anything member-related.
What is a realistic timeline for seeing results from an AI pilot?
Most agencies see early wins within 30 to 60 days of a focused pilot. Time saved on content planning, faster onboarding for new DSPs, and smoother documentation workflows tend to show up first. Bigger outcomes like improved audit readiness and better staff retention usually take 6 to 12 months. The key is to track a few clear metrics from day one so you can see what is actually changing.
What if our DSPs are not comfortable with new technology?
This is one of the most common concerns we hear, and it is a fair one. The good news is that modern AI tools are built to be simple. A DSP who can text on a phone can usually pick up an AI documentation tool in an afternoon. Short hands-on training, a peer champion on each team, and leadership visibly supporting the rollout go a long way. If a tool feels too complicated for your team, that usually means the tool needs work. If your staff can use YouTube or Facebook, they'll be able to pick Kibu right up. Just saying!