Today I want to share Offload, a JavaScript SDK that runs AI directly in the user's browser.
You can use it to add AI features to your website, with one key difference: it lets your users run AI tasks locally, keeping their data on their devices and avoiding the need to send it to a third-party inference API.
It also lowers your costs and helps your application scale inexpensively: the more inference is offloaded to users' devices, the fewer resources you need to allocate or spend on third-party APIs.
If you are an application developer, integrating Offload only improves your application: it keeps working as usual while offering your users the option to process their data locally, with no extra effort on your part.
You can integrate Offload as a drop-in replacement for whatever SDK you are using right now, changing only your inference function calls, as in the sketch below.
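As a minimal sketch of what that swap could look like: the "before" snippet uses OpenAI's real chat completions endpoint for illustration, while the "after" snippet assumes a hypothetical `createOffload` initializer and `generateText` method, since the actual Offload API names may differ.

```js
// Before: inference through a remote API (OpenAI's chat completions
// endpoint, shown for illustration). The user's text leaves the device.
// OPENAI_API_KEY is assumed to come from your app's configuration.
async function summarizeRemotely(userText) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: `Summarize this: ${userText}` }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// After: the same feature through Offload. `createOffload` and
// `generateText` are hypothetical names used for illustration; check
// the Offload docs for the actual initialization and call signatures.
const offload = createOffload({ projectId: "your-project-id" });

async function summarizeLocally(userText) {
  // Runs on-device when the browser and hardware allow it; otherwise
  // Offload falls back to the API configured in your dashboard.
  return offload.generateText({ prompt: `Summarize this: ${userText}` });
}
```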
Offload **automatically serves models of different sizes** to your users, depending on the device and its resources. If a user's device does not have enough resources, Offload will not show that user the option to process data locally and will fall back to whatever API you specify via the dashboard.
In the dashboard, you can configure and manage prompts, customize and test them against the different models, collect analytics, and more, all without exposing your users' data to any third party, since everything is processed on-device.
Offload supports generating text responses, enforcing structured output via JSON Schemas, streaming text responses, and more; see the sketch below.
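Here is a rough sketch of what structured and streaming generation could look like, reusing the hypothetical `offload` client from the previous snippet. `generateObject` and `streamText` are likewise illustrative names, not confirmed API.

```js
// Structured output: constrain the model's reply to a JSON Schema.
// NOTE: `generateObject` is a hypothetical name for illustration.
const contact = await offload.generateObject({
  prompt: `Extract the contact details from this email: ${emailText}`,
  schema: {
    type: "object",
    properties: {
      name: { type: "string" },
      phone: { type: "string" },
    },
    required: ["name"],
  },
});

// Streaming: consume the response chunk by chunk as it is generated.
// `streamText` is likewise hypothetical, assumed here to return an
// async iterable of text chunks.
const outputElement = document.querySelector("#output");
for await (const chunk of offload.streamText({ prompt: userText })) {
  outputElement.textContent += chunk;
}
```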
If there's anything you'd like to see that we don't yet support, please leave a comment!
I believe local AI is the future. At the same time, as AI continues to advance, I am increasingly concerned about how our data is processed.
Almost every application that implements an AI feature today relies on a remote API to which it sends the user's data. Most of these applications use public APIs such as OpenAI's and Anthropic's. The flow is simple: the application collects the user's data and sends it, along with the prompt, to the remote API, which replies with the generated text or image.
The big problem with this approach is that when you give an application access to a document (or a photo, a video, or any other piece of data), it sends that document, along with any sensitive information it contains, to a remote API. The API provider may log the prompts, use the data to train new models, or sell it for other purposes.
I think the data privacy problem is even worse now that we have LLMs. LLMs make it possible to index huge amounts of unstructured information in ways that weren't feasible before, which increases the danger of exposing any piece of personal information.
For example, let's say you keep a diary. It likely reveals where you live, your schedule, who your friends are, where you work, maybe how much you earn, and much more. Even when not written down directly, that information can probably be inferred from the diary's content. Until now, extracting it would require someone to read the whole thing; with an LLM, someone could pull out enough detail to impersonate you in seconds.
By using an app to chat with your diary, you are potentially exposing all of that information, since the diary is sent to some remote API.
On the other hand, if such an application uses Offload, you can use it safely: your data never leaves your device, so it cannot be exposed.
This is especially important for applications that handle highly sensitive data: healthcare, legal work, document processing, personal assistants, and so on.
Integrate Offload into your application today!