Possible directions and bets for the MVP
Research and development is one of the regular operational activities of most DAOs. It feeds new process development, systems development, and the gathering of information for proposals and other important resources for the DAO.
Lido DAO's research and governance processes are one example of the need for active, impactful research within DAOs.
Ideally, DAO operators want productivity in their research process, not information overload.
One example: the Head of DAO Operations at a popular DAO wanted to develop a model for running the DAO's RFP programs. They put up a quick bounty calling for someone to dive into the available information and produce a concise report on how other DAOs have structured their RFP programs.
This was a need for simplified, up-to-date research output. If an intuitive platform can help more operators do effective research, it will empower teams and individual researchers and save them an average of 72 hours of research.
There are existing options for this kind of research that we believe can make an impact, but they don't build in tooling that fits the context of these DAOs.
Solution: an initiative to build a simple tool that will solve this
Microsense can be a simple, intuitively designed tool that helps DAO operators do effective research.
We would build functions around crawling DAO websites and apps, training the system on governance-related knowledge banks, and supporting existing processes like proposal creation. Features could include connecting GPT/Gemini-powered generations to your Notion workspace for easy formatting, and easy migration from result pages to proposal platforms like Discourse. Simple and powerful stuff.
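As a rough illustration of the crawling piece, the sketch below pulls recent governance topics from a DAO's Discourse forum through Discourse's public JSON endpoints. This is an exploratory sketch, not a committed design; the forum URL and function names are placeholders.

```python
# Minimal crawling sketch: pull recent governance topics from a DAO's
# Discourse forum. Standard Discourse instances expose /latest.json and
# /t/{id}.json for public categories without authentication.
import requests

FORUM = "https://research.lido.fi"  # placeholder: any public Discourse forum

def fetch_latest_topics(limit: int = 10) -> list[dict]:
    """Return the most recent forum topics as simple dicts."""
    resp = requests.get(f"{FORUM}/latest.json", timeout=30)
    resp.raise_for_status()
    topics = resp.json()["topic_list"]["topics"][:limit]
    return [{"id": t["id"], "title": t["title"], "slug": t["slug"]} for t in topics]

def fetch_topic_posts(topic_id: int) -> list[str]:
    """Return the rendered post bodies of one topic, for the knowledge bank."""
    resp = requests.get(f"{FORUM}/t/{topic_id}.json", timeout=30)
    resp.raise_for_status()
    return [p["cooked"] for p in resp.json()["post_stream"]["posts"]]

if __name__ == "__main__":
    for topic in fetch_latest_topics(5):
        print(topic["title"])
```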
Some uses of this include learning about different ecosystems across web3 and developing new DAO working/operating models from these well-trained, regularly updated models, or understanding an ecosystem through real-time information drawn from forums, communities, and knowledge repositories. In other words, we'll be continuously optimizing our models for quality. The knowledge bank fed into the model can be ecosystem-specific and easily uploaded by users through methods like file upload, image detection, audio, etc.
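To show how an ecosystem-specific knowledge bank could be ingested, here is a minimal sketch assuming plain text/Markdown uploads and OpenAI embeddings. The chunk size, embedding model, and in-memory storage are illustrative assumptions, not final choices.

```python
# Minimal ingestion sketch: split uploaded files into chunks and embed them
# so they can be queried later. Assumes the official openai Python SDK and
# an OPENAI_API_KEY in the environment.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def chunk_text(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Split text into overlapping character chunks."""
    step = size - overlap
    return [text[start:start + size] for start in range(0, len(text), step)]

def ingest_knowledge_bank(folder: str) -> list[dict]:
    """Embed every .txt/.md file in a folder into an in-memory knowledge bank."""
    bank = []
    for pattern in ("*.txt", "*.md"):
        for path in Path(folder).glob(pattern):
            for chunk in chunk_text(path.read_text()):
                emb = client.embeddings.create(
                    model="text-embedding-3-small", input=chunk
                ).data[0].embedding
                bank.append({"source": path.name, "text": chunk, "embedding": emb})
    return bank
```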
The model will act as a container with specific algorithms ready to train on whatever knowledge bank is fed to it. A regular user can import a bulk knowledge bank, query the system against it, and export well-formatted summaries of their queries. Simplicity and quality (Microsense's quality control isn't meant to be as limited as GPT-3.5) will be the unique selling points. A user should be able to get their best research experience and responses using Microsense.
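Building on the ingestion sketch above, this is one possible shape for the import → query → export flow: retrieve the most relevant chunks, ask a GPT-class model for an answer grounded in them, and write out a formatted Markdown summary. The model name, prompt, and output format are assumptions for illustration only.

```python
# Minimal query-and-export sketch over the in-memory knowledge bank above.
import math
from openai import OpenAI

client = OpenAI()

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def query_knowledge_bank(bank: list[dict], question: str, k: int = 5) -> str:
    """Answer a question using only the user's uploaded knowledge bank."""
    q_emb = client.embeddings.create(
        model="text-embedding-3-small", input=question
    ).data[0].embedding
    top = sorted(bank, key=lambda item: cosine(q_emb, item["embedding"]), reverse=True)[:k]
    context = "\n\n".join(f"[{item['source']}]\n{item['text']}" for item in top)
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[
            {"role": "system", "content": "Answer using only the provided context. Cite sources."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content

def export_summary(question: str, answer: str, path: str = "summary.md") -> None:
    """Write the query result as a well-formatted Markdown summary."""
    with open(path, "w") as f:
        f.write(f"# Research summary\n\n## Question\n{question}\n\n## Answer\n{answer}\n")
```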
Another optimization: if we can make this interact seamlessly with Notion and other collaborative tools, it'll be unstoppable.
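One hedged sketch of that hand-off: pushing a finished summary into a Notion workspace through Notion's public REST API. The token, parent page ID, and exact payload shape are assumptions to verify against Notion's current API reference; the point is only to show how light the integration could be.

```python
# Minimal sketch of the Notion hand-off: create a page containing the
# research summary under an existing parent page. Token and page ID are
# placeholders supplied by the user.
import os
import requests

NOTION_TOKEN = os.environ["NOTION_TOKEN"]          # integration token
PARENT_PAGE_ID = os.environ["NOTION_PARENT_PAGE"]  # page the summary goes under

def push_summary_to_notion(title: str, paragraphs: list[str]) -> str:
    """Create a Notion page containing the summary and return its URL."""
    headers = {
        "Authorization": f"Bearer {NOTION_TOKEN}",
        "Notion-Version": "2022-06-28",
        "Content-Type": "application/json",
    }
    children = [
        {
            "object": "block",
            "type": "paragraph",
            "paragraph": {"rich_text": [{"type": "text", "text": {"content": p}}]},
        }
        for p in paragraphs
    ]
    payload = {
        "parent": {"page_id": PARENT_PAGE_ID},
        "properties": {"title": [{"text": {"content": title}}]},  # verify shape against Notion docs
        "children": children,
    }
    resp = requests.post("https://api.notion.com/v1/pages", headers=headers, json=payload)
    resp.raise_for_status()
    return resp.json()["url"]
```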