Seems to make sense for maximum privacy. Put together a large enough model to answer health queries, add OCR and image recognition to read exam results, give it web access to look up medication details and, of course, have it gather raw data from any devices you use to measure weight, heart rate, etc.
But that’s just the theory, and we all know these things are hard to put together. In practice, have you had any experience getting anything like this working locally?
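Roughly the kind of glue I have in mind, purely as a sketch: OCR a scanned lab report locally and hand the text to a local model over its HTTP API. The pytesseract/Ollama pairing, the model name, and the file name are just placeholder assumptions, not a claim about what anyone here actually runs.

```python
# Hypothetical local pipeline: OCR a lab report, then ask a local model about it.
# Assumes Tesseract + pytesseract are installed and an Ollama server is running
# locally with some model pulled (the model name below is a placeholder).
import pytesseract
from PIL import Image
import requests

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    # Ollama's local HTTP API; nothing leaves the machine.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    # OCR the scanned exam / lab result (file name is illustrative).
    report_text = pytesseract.image_to_string(Image.open("lab_report.png"))
    answer = ask_local_model(
        "Here is the OCR'd text of a lab report:\n\n"
        f"{report_text}\n\n"
        "Summarize the results and flag anything outside the reference range."
    )
    print(answer)
```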


What part requires perfect precision?
If you want to parse sensor data, you do it in code before the LLM sees it.
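Something like this, for example: reduce the raw readings to a small structured summary in plain Python and only put that summary into the prompt. The CSV layout, column name, and file path are made up for illustration.

```python
# Minimal sketch: summarize raw heart-rate readings in code, then hand only
# the summary to the LLM. Column name and file path are hypothetical.
import csv
import statistics

def summarize_heart_rate(path: str) -> dict:
    with open(path, newline="") as f:
        bpm = [int(row["bpm"]) for row in csv.DictReader(f)]
    return {
        "samples": len(bpm),
        "min_bpm": min(bpm),
        "max_bpm": max(bpm),
        "mean_bpm": round(statistics.mean(bpm), 1),
        # rough resting estimate: ~5th percentile of all samples
        "resting_estimate": round(statistics.quantiles(bpm, n=20)[0], 1),
    }

summary = summarize_heart_rate("heart_rate.csv")
prompt = (
    "Given this heart-rate summary for the last week, is anything unusual?\n"
    f"{summary}"
)
# `prompt` is what gets sent to the local model; the raw samples never are.
```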