
Search Results


  • CSR and Sports in India: Emerging Trends and Implications | Pacta


  • Anti-Sexual Harassment Laws & Classical Arts | Pacta


  • AI Adoption by Indian NPOs | Pacta

AI Adoption in Indian Social Sector Programs: Early Insights, Emerging Stakeholders and Future Directions

Introduction

Artificial intelligence (AI) is quietly revolutionising how India addresses its social challenges, offering transformative solutions across sectors. From personalising education in urban slums to empowering farmers with real-time crop health advisories in remote villages, AI is reshaping the landscape of social innovation. As India expands its role in AI for social impact, it is critical to glean early insights from adopters of AI to ensure the democratisation of AI technologies. India's non-profit organisations (NPOs) are beginning to explore AI's potential for social impact, but adoption remains in its early stages. The report AI Adoption in Indian Social Sector Programs: Early Insights, Emerging Stakeholders and Future Directions aims to provide a clearer picture of AI adoption in India's non-profit sector, identify gaps, and highlight opportunities for sustainable, equitable and impactful AI integration into tech4good narratives. Summarised insights from the report are provided below.

Click here for the full report
Listen to the audio overview here

Contents
- What are the different variables influencing adoption of AI by non-profits?
- How has India adopted and deployed AI so far?
- Who are the key players in the AI ecosystem?
- What are the different risks in AI deployment for the social sector?
- How can non-profit organisations ensure fair and effective use of AI?
- Recommendations for strengthening AI adoption in the social impact ecosystem

What are the different variables influencing adoption of AI by non-profits?

Founder's Attitude
The adoption of AI in non-profits' programs is strongly influenced by the founder's attitude towards the technology and their understanding of it.
"If the founding team recognizes that AI is a tool with the potential to improve efficiency, drive innovation, and scale operations, this can significantly push the organization towards AI adoption. Their belief in the tool's value is often a catalyst for change." - Ooloi Labs

Limitations of AI
AI systems often generate plausible but inaccurate information about local customs, risking misguided interventions. NPOs know that information must be context-sensitive and delivered with care. Such missteps can damage community trust and harm an NPO's reputation.
"We wanted to ensure that safeguarding of users was not something we compromised on. For example, if I asked for safe at-home abortion, it was giving me advice to do it, which we never wanted it to do." - One Future Collective

Limited Understanding of AI
Despite optimistic expectations of how AI can be leveraged in the social sector, organisations are often unaware of what, how and when to use AI tools in their programs.
"The challenge is that organizations are often unaware of when, what, and how to use these tools. The conversation we are having with funders and partners is that the system and technology are already in place. Now, the focus should be on training people effectively, ensuring good quality curriculums are in place to help organizations adopt these tools and solutions in a better way.
Our priority is not just building AI solutions but creating awareness and ensuring the right set of AI tools are adopted by the right people." - 10x Impact Labs

Challenges in Hiring AI Experts
Organisations collecting data are inclined to hire data scientists and engineers, which can be expensive for NPOs.
"AI experts are in high demand and very expensive, which makes it difficult for an NGO like ours. We have been trying to hire AI professionals for the past two to three months, but it has been very tough. So, we plan to work with universities and colleges to hire fresh talent and train them. However, keeping them with us is also a challenge." - MSSRF

Data Availability, Policies and Sharing Restrictions
The lack of well-organised, usable data and open datasets, coupled with the absence of a consistent data standard and of inclusive datasets tailored to specific social contexts, makes it challenging to build AI systems in the social sector that can reliably process and analyse information.
"We are particularly concerned about representation and information bias, especially since we often work with first-time technology users. If we fail to provide unbiased information, we risk disillusioning these communities. Therefore, it is essential to gather knowledge from the ground up, ensuring that the data we collect accurately represents the communities we serve." - Gram Vaani

Funding Constraints
The availability of funding determines how persistent an organisation can be in developing its AI tool. Pathways to government adoption and scaling drive funder motivation.
"There is often a risk associated with creativity in AI development due to financial constraints. Developing innovative solutions can be costly, leading organizations to play it safe rather than pursue potentially groundbreaking ideas. This is where initiatives like ours become valuable; we provide a space where individuals can experiment with their ideas at low risk by connecting them with volunteers who share their vision." - People Plus AI

Accessibility of Digital Infrastructure
The absence of digital infrastructure and internet connectivity in remote and rural areas, and gender-based disparities in access to smartphones and the internet, hold back non-profits from adopting AI in their programs.
"India may do well in terms of engineers but falls behind in data center capacity. This gap poses a challenge because even if we have innovative solutions, without adequate data storage and processing capabilities, we can't deploy them effectively. It's akin to owning a car without having a road to drive on." - People Plus AI

Collaborations and Networks
Collaborations with tech enablers, government, academia and big-tech platforms like Microsoft and Google, and access to these networks, shape AI adoption by non-profits.
"For a pilot of the AI based chatbot, we worked with Gritworks since we did not have in-house expertise. We had built our own static chatbot on Glific. To pilot using AI, we got Ullas to help us. Ullas' expertise in building tech products for the sector helped us accelerate the process." - FMCH

User Digital Habits and Perceptions
Non-profits have to navigate people's digital habits to get them to trust AI tools and use them consistently. Habits such as browsing YouTube, coupled with low storage capacity on phones, mean that NPOs need to constantly motivate their user base to keep using an AI-powered intervention.
"Limited phone storage is a major barrier.
Many smartphone users fill their devices with media from YouTube and WhatsApp, leading them to delete apps when space runs out. Farmers may use our app daily during sowing season but forget about it in the lean season, often deleting it to free up memory. Reacquiring users each season adds costs. Hence, keeping the farmers engaged throughout the year is essential." - Digital Green

How has India adopted and deployed AI so far?

India ranks:
- 1st in AI skill penetration and AI talent concentration*
- 5th in AI scientific publications*
- 2nd in AI deployment by social innovators**
* Nasscom Report
** World Economic Forum's AI for Impact report

AI Deployment by Indian Non-profits
AI tools used by Indian NPOs in their programs are mostly limited to simple chatbots and workflow automation. Significant uncertainty about the need for AI in social impact operations persists, highlighting a gap in awareness of, or readiness to adopt, AI technologies.

Some Examples of AI Deployment by Indian NPOs
Type of Intervention: Chatbot
Organisation: Foundation for Mother and Child Health (FMCH), aided by Gritworks
AI Intervention: AI chatbot
Pilot: run with 150 families, then improved to provide better responses
Usage: provides mothers with timely, accurate information about maternal health and malnutrition
Future Vision: scaling to other states and languages, enhancing accuracy and user engagement

What AI development and deployment models are Indian NPOs subscribing to?
NPOs' choices fall along three dimensions: AI development (resourcing) models, source code models for AI development, and funding models for AI tool development.

Types of resourcing for AI development: in-house, outsourced, or hybrid. Some NPOs have the technical capacity to build AI tools in-house, while others partly or wholly rely on external vendors or collaborators to develop the technology.

In-house
- Allows for better control by the user team
- Requires technical resources and skilled personnel
- Long-term maintenance, including future tweaks, bug fixes and updates, is cost-effective
- Data remains within the organisation, reducing the risk of breaches
- Enables faster adaptation and response to feedback
- May divert focus from core social initiatives
- Needs continuous capacity building and monitoring of changing technologies

Outsourced
- Success depends on the tech enabler's approach
- Beneficial if there is limited technical capacity
- External resources for maintaining and running the application can be unreliable and expensive over time
- Sharing data with vendors may entail security risks
- Reduces flexibility in adapting to changes and responding to feedback
- Allows the NPO to keep its focus on the solution rather than the technology behind it
- Works best with an established, specific use case or problem statement and a strong understanding of community-specific requirements

Types of source code for AI development: open-source models versus proprietary models. Developing proprietary AI models requires significant and continuous resource investment in the form of data, computational power, and technical and financial resources, so developing proprietary models may not be feasible for most non-profits in India. Employing pre-existing models or platforms reduces the financial, technical and data investments needed for AI interventions, but requires an additional layer of training to provide more contextualised outcomes.
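As an illustration of how a pre-existing model might be contextualised without building a proprietary one, the sketch below shows a minimal retrieval-grounded chatbot of the kind the FMCH example describes: it answers only from a small library of vetted, locally approved content and refuses anything outside that library (the safeguarding behaviour One Future Collective highlights). This is a hedged sketch under stated assumptions: the `call_hosted_model` placeholder, the sample content entries and the keyword-overlap retrieval are illustrative inventions, not the implementation used by any organisation named in the report.

```python
# Illustrative sketch only: a chatbot that answers exclusively from vetted,
# locally approved content and refuses otherwise. All names and data below
# are placeholders, not any named organisation's actual tooling.

APPROVED_CONTENT = [
    # In practice this library would be curated by health workers / domain experts.
    {"topic": "infant feeding",
     "text": "Exclusive breastfeeding is recommended for the first six months."},
    {"topic": "growth monitoring",
     "text": "Children under five should be weighed and measured every month."},
]

REFUSAL = ("I can only share information reviewed by our health team. "
           "Please contact your local health worker for this question.")


def _keywords(text: str) -> set[str]:
    """Crude keyword extraction: lowercased words longer than three characters."""
    return {w.strip(".,?!").lower() for w in text.split() if len(w) > 3}


def retrieve(question: str) -> list[str]:
    """Return vetted passages that share at least one keyword with the question."""
    q_words = _keywords(question)
    return [e["text"] for e in APPROVED_CONTENT if q_words & _keywords(e["text"])]


def call_hosted_model(prompt: str) -> str:
    """Placeholder for whichever pre-existing model or platform the NPO uses.

    Here it simply echoes the first vetted passage so the sketch runs end to end.
    """
    return prompt.split("Passages:\n- ")[1].split("\n")[0]


def answer(question: str) -> str:
    passages = retrieve(question)
    if not passages:
        # Safeguarding: never let the model improvise outside vetted content.
        return REFUSAL
    prompt = ("Answer the question using ONLY the passages below. "
              "If they do not contain the answer, say you cannot help.\n\n"
              "Passages:\n- " + "\n- ".join(passages) +
              f"\n\nQuestion: {question}")
    return call_hosted_model(prompt)


if __name__ == "__main__":
    print(answer("Is exclusive breastfeeding recommended for six months?"))  # grounded reply
    print(answer("Is it safe to take this medicine at home?"))               # refusal
```

The design choice the sketch tries to make visible is that the contextual layer sits around the model rather than inside it: the vetted library, not the model's own training data, decides what the assistant is allowed to say.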
Three major models of funding for AI development: self-funded, external funding, or hybrid funding. Some organisations first self-fund an intervention to develop a proof of concept and establish the use case, then raise funds to actively pilot and scale the intervention.

Success in raising funds for AI deployment by NPOs depends on:
- A successful proof of concept with potential to scale
- Presence of in-house technical capacity
- Promise of value and benefit to end users
- An uptick in AI investment trends in the sector
- Robust data governance, ethical, and Responsible AI (RAI) practices
- A clear deployment framework and AI roadmap

Who are the key players in the AI ecosystem?
The stakeholders in the AI ecosystem are profiled below based on their roles.
Charting Stakeholder Relationships in the AI Ecosystem

Stakeholder: Big Tech Companies
As the largest and most influential technology companies in the world, Big Tech companies dominate their respective markets through their vast user bases, control over large amounts of data, and extensive technological ecosystems. Examples of dominant firms: Alphabet (Google), Amazon, Apple, Meta (Facebook), and Microsoft.

Potential Roles
- Provide funding for AI interventions and innovations in non-profits
- Play a mentorship and advisory role to encourage the use of AI models and advancements to solve social sector problem statements
- Provide access to technical resources such as cloud credits and pro-bono/low-bono access to tech tools
- Volunteer developers to aid non-profits

How does Big Tech benefit?
- Big Tech's AI models can be developed and tested in controlled settings that closely resemble the on-ground realities brought in by non-profit organisations, with the potential to achieve population scale
- When providing funding, Big Tech companies' technology services, tools and models are also likely to be deeply integrated into the grants offered

How do NPOs benefit from them?
NPOs receive technical expertise, access to models, infrastructure and a community of coders that would be difficult to build on their own. However, relationships with Big Tech can be extractive for non-profits. The role of NPOs as data providers for Big Tech raises concerns about extractive data practices without reciprocal benefits to non-profits, and about awareness of the value of the data beyond what is envisaged at first glance. NPOs must review grant and partnership agreements with Big Tech, ensuring their (beneficiary) data is used ethically and not for unintended purposes. A close understanding of the technology and data flow will also allow non-profits to maintain their autonomy and act in the best interest of their beneficiaries.

Stakeholder: Tech Providers and Enablers
Tech enablers and tech providers are an emerging stakeholder group and a mission-critical player in catalysing AI adoption in the social sector. Across tech enablers and providers, there is a shared commitment to integrating technology into the social sector to enhance impact and efficiency. Their collective efforts aim to democratise access to technology, ensuring that even the smallest organisations can benefit from advancements in AI and data analytics.
Examples:
- Tech Providers: Sarvam AI, Open NyAI, Karya and Tech4Dev
- Tech Enablers: Gram Vaani, People + AI and Ooloi Labs

Potential Roles
- Tech providers develop data stacks, open-source tech tools and platforms for the social sector which can be customised for non-profit-specific usage
- Tech enablers partner with NPOs to guide AI adoption; they work with NPOs to customise, develop, deploy and scale AI in social sector programs based on their specific problem statements and context, and embed ethical governance in their approach
- Play a key role in advancing the responsible and ethical use of AI by NPOs
- Build trust in the technology by bridging the gap between Big Tech and NPOs

How do NPOs benefit from them?
- Smaller non-profits that do not have advanced technical teams get support to explore and experiment with AI
- NPOs can create a proof of concept from hackathons and maker-spaces organised by tech enablers, and can build AI tools with volunteering support from tech enablers
- NPOs can build their specific tools based on models, tech stacks and tools developed by tech providers
- Awareness and capacity building about the ethical concerns to be factored in when utilising AI

Stakeholder: Government
Government policies and regulations on responsible AI, algorithmic transparency and accountability, data privacy, and digitisation of last-mile service delivery have profound impacts on trust in technology, uptake of technology, digital infrastructure, and public perceptions of AI risks and potential.

Potential Roles
- Provide regulatory frameworks and policy impetus for AI implementation
- Run or fund grant and incubation programs focused on AI for social impact
- Initiate tech innovations such as the development of sovereign models
- Make available hardware and software resources and open datasets to lay a strong foundation and infrastructure

How do NPOs benefit from them?
- Ideal scaling partner: NPOs view the government as uniquely equipped to scale AI-based interventions due to its extensive resources, capacity, and nationwide reach. The government's role as the largest service provider in health, agriculture, and education is critical to scaling solutions effectively.
- Alignment with government is a mark of credibility: NPOs using AI for information dissemination rely on government-approved content to ensure that AI responses are aligned with government-driven narratives.

Government initiatives to create an enabling climate for AI adoption for social impact:
- Regulatory Frameworks - AI Competency Framework: designed to equip public sector officials with AI-related skills, including competency mapping and upskilling initiatives; it follows global best practices to support informed AI policy-making and implementation.
- Access to Cloud & Computing Infrastructure - India AI Compute Portal: intended to provide 14,000 Graphics Processing Units (GPUs), with 4,000 more coming soon, to support AI startups, researchers, and developers at affordable rates.
- Access to Funding - BharatGen: the world's first government-funded multimodal LLM initiative, aiming to enhance public service delivery and citizen engagement through foundational models in language, speech, and computer vision.
- Investments in Sovereign AI Models - Bhashini: an AI-powered language translation platform that enables real-time speech translation across Indian languages.
It leverages AI, NLP, and crowdsourced data to break language barriers and supports developers in building native-language tools and services.
- Better Quality Public Data - India AI Datasets Platform: provides seamless access to quality non-personal datasets to facilitate AI innovation and research.
- Public-Private Partnerships in Social Impact - India AI Mission: a comprehensive initiative with a budget of ₹10,371.92 crore aimed at fostering AI innovation through public-private partnerships, including funding for social impact projects.

Stakeholder: Philanthropies and Social Impact Investors
Philanthropic organisations often lead efforts to democratise access to AI and other powerful technologies. While their global impact is significant, in India few philanthropies focus on funding interventions and supporting research and development in AI for social impact. Examples: ACT Grants, The Agency Fund, Wadhwani AI.

Potential Roles
- Advocate for democratised access to AI
- Fund AI pilots and scale AI interventions
- Invest in research and development, capacity building, and the development of open resources such as models, governance frameworks and Responsible AI (RAI) frameworks
- Foster spaces to advance the discourse about AI in the social sector

Stakeholder: Academia and Think Tanks
Academia and think tanks in India and abroad make mission-critical contributions to the advancement of AI in the social sector through research, policy development, awareness and capacity building, frameworks for RAI, development of open-source models, and catalysing sectoral partnerships and collaborations. Examples: Indian Institute of Science ARTPARK, AI4Bharat, Digital Futures Lab's RAIL Fellowship.

Potential Roles
- Research and Policy: research enables a scoping of the potential benefits and challenges of AI in social impact sectors such as healthcare, education and governance, and helps develop policy recommendations to guide its effective use.
- Development of open-source models: academic initiatives also contribute to developing open-source resources that are valuable for the social impact ecosystem, as they are free of cost.
- Advocacy and Awareness: studies by think tanks and academia can inform advocacy for the responsible use of AI, ensuring that its deployment is equitable, inclusive, and aligned with societal values.
- Collaboration and Partnerships: think tanks identify pathways for collaborations with government agencies, NPOs, and private sector entities to facilitate the adoption of AI solutions in social impact areas.
- Education and Capacity Building: academia plays a vital role in educating the next generation of AI professionals and policymakers about the ethical and social implications of AI.
What are the different risks in AI deployment for the social sector?

Technical Risks
- Lack of awareness of biases in, and limitations of, AI
- Lack of clear AI deployment roadmaps
- Heavy reliance on Big Tech for tech capacity
- Limited India-based AI models
- Risk of data-extractive practices for NPOs
- Limited open-source data and reliable high-fidelity data
- AI solutions' reliance on cloud-based platforms
- Potential to widen the digital divide for rural and marginalised communities
- Reliability concerns over non-transparent AI models
- Lack of visibility into the resources needed to develop and scale an AI-based intervention

Legal / Governance / Ethical Risks
- India's fragmented regulatory approach to AI
- Exposing users to privacy breaches, surveillance, or misuse of data
- Legal disputes over proprietary knowledge and models developed externally
- Lack of conversations around clear risk mitigation strategies
- Close involvement and control from funders may lead to biased, inequitable outcomes
- Organisations unaware of multi-fold data vulnerabilities
- Lack of structured data governance and AI risk management frameworks
- Rarity of third-party audits increases risks of unintended consequences
- Concerns around bias, ethical AI use, and compliance with data privacy laws
- Reinforcement of biases by AI models trained on non-diverse datasets

Human Resource Risks
- In-house resource constraints may lead to reliance on external tech partners
- AI deployment can result in job losses
- Lack of conviction and trust in AI-deployed programs can lead to fragmented adoption

Financial Risks
- NPOs struggle to access funding to experiment with AI
- Underestimation of ongoing costs by NPOs
- Sustained investment is required even past the pilot stage
- Short-term, output-driven funding models fail to support the iterative and data-intensive nature of AI development

How can non-profit organisations ensure fair and effective use of AI?

The three key principles for fair use of AI:
- Tech should not be seen as one-size-fits-all
- Tech is a means to an end, not an end in itself
- Tech can have unintended consequences, exacerbating inequities

Steps NPOs can take for context-driven AI engagement:
- User-centric design
- Participatory models (community involvement) and representative data
- Contextual understanding of the problem
- Continuous testing, refining, and iterating based on feedback (one way to make this routine is sketched after this list)

Read Pacta's primer on "Mitigating Legal and Ethical Risks for non-profits in use of AI" here.
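To make the continuous-testing step concrete, below is a minimal sketch of a pre-deployment check that replays questions vetted by the programme team against the chatbot and flags regressions, either answers missing approved guidance or failures to refuse out-of-scope queries. The test cases, the `answer_fn` hook (which could be the `answer` function from the earlier sketch) and the pass criteria are illustrative assumptions, not a prescribed methodology from the report.

```python
# Illustrative sketch only: replay vetted questions before each release and
# block deployment if any expected behaviour regresses. Cases and criteria
# below are assumptions for illustration.

from dataclasses import dataclass

REFUSAL_MARKER = "contact your local health worker"  # assumed refusal phrasing


@dataclass
class Case:
    question: str
    must_contain: str | None = None   # phrase expected in a grounded answer
    must_refuse: bool = False         # True for out-of-scope or unsafe queries


CASES = [
    Case("Is exclusive breastfeeding recommended for six months?",
         must_contain="six months"),
    Case("Is it safe to take this medicine at home?", must_refuse=True),
]


def run_checks(answer_fn) -> bool:
    """Return True only if every vetted case behaves as expected."""
    ok = True
    for case in CASES:
        reply = answer_fn(case.question).lower()
        if case.must_refuse and REFUSAL_MARKER not in reply:
            print(f"FAIL (should refuse): {case.question}")
            ok = False
        elif case.must_contain and case.must_contain.lower() not in reply:
            print(f"FAIL (missing '{case.must_contain}'): {case.question}")
            ok = False
    return ok


# Example gate, assuming `answer` is the NPO's chatbot entry point:
# if not run_checks(answer):
#     raise SystemExit("Chatbot failed vetted content checks; do not deploy.")
```

A check like this stays cheap enough to run on every content or model update, which is what turns "continuous testing, refining, and iterating" from a principle into a habit.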
Recommendations for strengthening AI adoption in the social impact ecosystem

Recommendation 1: Towards More Adoption and Awareness
We need AI deployment frameworks and roadmaps to optimally leverage AI technology. Access to data, tech tools and tech infrastructure, human resource capacity, and stakeholder linkages remain critical to ensure the broadbasing of AI as a resource for all.

HOW CAN DIFFERENT STAKEHOLDERS ACTION THIS RECOMMENDATION?

Tech Enablers
To support AI adoption in India's social sector, tech enablers could create open, context-specific resources like deployment frameworks, case studies, playbooks, and failure files. They can also build databases of FOSS tools and service providers, offer capacity-building programs, develop affordable, low-bandwidth-friendly AI models, and establish knowledge-sharing platforms, making AI more accessible, ethical, and impactful for grassroots and non-profit organisations.

Funders
Funders should invest in the development of open resources, free and open-source AI models, and platforms to host and share these tools. Support should extend to capacity-building programs for NPOs, community-building initiatives, and dialogue among AI practitioners in the social sector. Funding should also enable NPOs to access essential infrastructure like hardware and computing power, while fostering multi-stakeholder collaboration through consortia of NPOs, tech enablers, and researchers to co-create impactful AI solutions.

Government
Government should focus on building local data centers, affordable storage infrastructure, and access to open-source data and foundational models trained on Indian datasets. Support should also go toward affordable, low-bandwidth AI tools, localised cloud computing, and sovereign AI initiatives. Policies should incentivise social-impact-driven AI through startup grants, incubators, and support for tech builders and tech enablers, alongside funding research that explores AI's ethical use and best practices in the social sector.

  • Data Protection for Children–Pathways towards Implementing Intentions of the DPDP Act | Pacta


  • State of Unique Disability Identity Implementation in Karnataka | Pacta


  • Data Insights on Decisions of India's Chief Commissioner for Persons with Disabilities in 2023 | Pacta


  • Disability Budgets In India 2023 | Pacta


  • State Of Special Educators in India | Pacta


  • Tracing the Budgetary Allocations for Disability Sector in India | Pacta


  • Early Analysis of Disability Data Gaps | Pacta

