Here's the architecture diagram; I've realized at the events we've visited that this is difficult to convey verbally. Rather than a typical "big-tech" AI solution that prioritizes compute and context-window size within a single model, Pulse is an engine for any LLM that can improve efficiency (and therefore time-to-response and power consumption) by as much as 80%. The engine also includes a full set of tools for quickly creating access-control rules and assigning specific agents to respond to specific types of searches.
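To make the routing-and-access-control idea concrete, here is a minimal sketch. Every name here (`Rule`, `Router`, `dispatch`, the role labels) is hypothetical and illustrative only; this is not Pulse's actual API, just one way rules mapping search types to agents might look:

```python
# Hypothetical sketch of search-type routing with access control.
# All names are illustrative; this is not the real Pulse API.
from dataclasses import dataclass, field


@dataclass
class Rule:
    search_type: str                 # category of incoming search/command
    agent: str                       # agent assigned to handle it
    allowed_roles: set = field(default_factory=set)  # who may invoke it


class Router:
    def __init__(self, rules):
        self.rules = {r.search_type: r for r in rules}

    def dispatch(self, search_type, role):
        """Return the agent for a search type, enforcing the role check."""
        rule = self.rules.get(search_type)
        if rule is None:
            raise KeyError(f"no agent assigned for {search_type!r}")
        if role not in rule.allowed_roles:
            raise PermissionError(f"{role!r} may not run {search_type!r}")
        return rule.agent


rules = [
    Rule("factory-status", "scada-agent", {"operator", "admin"}),
    Rule("general-search", "chat-agent", {"operator", "admin", "guest"}),
]
router = Router(rules)
print(router.dispatch("factory-status", "operator"))  # scada-agent
```

The point of the sketch is the separation: one declarative rule set covers both "which agent answers this" and "who is allowed to ask," so the two concerns are configured together rather than scattered across the system.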

While I say "searches," I really mean "commands": most of what we currently think of using AI for is search, but Pulse (like other engines) has tool-use capabilities. For instance, a "search" could actually be a series of outputs from a SCADA system (the status of a machine in a factory), and the "response" could involve using a tool to deactivate a faulty machine or trigger an alarm.
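As an entirely hypothetical illustration of that SCADA flow, the "search" below is a status reading and the "response" is a tool call; the function names, fields, and thresholds are invented for this sketch and do not come from Pulse or any real SCADA interface:

```python
# Hypothetical sketch of the SCADA tool-use flow described above.
# Names and thresholds are illustrative only.

def deactivate_machine(machine_id):
    # In a real deployment this would call the SCADA control interface.
    return f"machine {machine_id} deactivated"


def trigger_alarm(machine_id):
    return f"alarm raised for machine {machine_id}"


def handle_status(reading):
    """Treat a SCADA status reading as the 'search'; pick a tool as the 'response'."""
    if reading["state"] == "fault":
        return deactivate_machine(reading["machine_id"])
    if reading["temperature_c"] > 90:          # invented threshold
        return trigger_alarm(reading["machine_id"])
    return "ok"


print(handle_status({"machine_id": "press-7", "state": "fault", "temperature_c": 70}))
```

In practice the branching logic would be the LLM's decision rather than hard-coded `if` statements, but the shape is the same: inputs arrive as data, and the engine's output is a tool invocation rather than text.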

Stacked together, that efficiency improvement, the search capability, and the access-control system mean that Pulse can feasibly run entirely self-contained: say, on a node at a factory with a few consumer-grade GPUs (our lab setup uses three RTX 3090s, for example). A network of Pulse nodes can work in an air-gapped environment to perform quality control on warheads just as effectively as it can replace existing social networking sites or services like DoorDash. It can even facilitate direct peer-to-peer commerce using existing payment solutions like PayPal.

I hope that helps - thank you for your time.

Made with love,

  • The Pulse Team