In the OpenClaw ecosystem, local-first AI assistants are changing how developers approach macOS app creation. A recent experiment demonstrates this shift: a user leveraged a 128GB M5 MacBook Pro to run capable local LLMs and build custom system monitoring tools through vibe coding. This process, which involves minimal manual coding and leans heavily on AI agents, highlights the potential of platforms like OpenClaw to democratize app development for people without extensive Swift or macOS internals knowledge.
The journey began with frustration over Activity Monitor and led to two SwiftUI apps: Bandwidther and Gpuer. Bandwidther monitors network bandwidth usage; it was written specifically to check whether Dropbox was transferring files over the LAN or downloading them from the internet. The user started with simple prompts, such as requesting a native SwiftUI app to show live network details, and used git to track progress. Claude, the AI agent, suggested features like per-process bandwidth and a two-column layout, eventually evolving the app into a menu bar item that opens an information panel.
Simultaneously, Gpuer was built to monitor GPU and RAM usage, addressing gaps in Activity Monitor. By referencing the in-progress Bandwidther app, the AI agent recombined elements to create a similar tool, using commands like system_profiler and memory_pressure. This approach of having agents learn from existing projects is a key advantage in the OpenClaw framework, enabling rapid prototyping and iteration without deep coding expertise.
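Wrapping commands like system_profiler and memory_pressure mostly comes down to launching a subprocess and capturing its standard output. A minimal sketch of that pattern follows; the helper name and structure are assumptions for illustration, not the apps' actual code:

```swift
import Foundation

/// Run a shell command and return its standard output as a string.
/// (Hypothetical helper; Gpuer's real implementation was not reviewed.)
func runShell(_ command: String) -> String {
    let process = Process()
    let pipe = Pipe()
    process.executableURL = URL(fileURLWithPath: "/bin/sh")
    process.arguments = ["-c", command]
    process.standardOutput = pipe
    do {
        try process.run()
    } catch {
        return ""
    }
    process.waitUntilExit()
    let data = pipe.fileHandleForReading.readDataToEndOfFile()
    return String(data: data, encoding: .utf8) ?? ""
}

// Example: the kind of queries a GPU/RAM monitor might issue (macOS only).
let gpuInfo = runShell("system_profiler SPDisplaysDataType")
let memInfo = runShell("memory_pressure")
```

A SwiftUI view can then poll such a helper on a timer and parse the text output into whatever numbers it charts, which is presumably where most of the fragility in these tools lives.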
However, a critical lesson from this experiment is the need for caution. The user emphasized that these apps are classic vibe coding examples: they don't know Swift and barely reviewed the code. With limited experience in macOS internals, they are unqualified to verify the accuracy of the numbers and charts these tools produce. For instance, Gpuer once reported only 5GB of memory left when Activity Monitor indicated otherwise. After the user pasted a screenshot into Claude Code for adjustments, the numbers improved, but confidence in their correctness remains low. Warnings have been added to the GitHub repositories to reflect this uncertainty.
Despite these reliability concerns, the projects yielded valuable insights for the OpenClaw community. A SwiftUI app can pack significant functionality into a single file, as seen with GpuerApp.swift at 880 lines and BandwidtherApp.swift at 1,063 lines. Wrapping terminal commands in a Swift-based UI is straightforward, and AI agents like Claude show good design taste for SwiftUI applications. Converting an app to a menu bar item takes only a few extra lines of code, and, importantly, Xcode is not necessary for building such applications.
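The "few extra lines" claim for menu bar conversion is plausible given SwiftUI's MenuBarExtra scene (macOS 13+). A minimal single-file sketch in the spirit of these apps; the app name and content are illustrative, not Bandwidther's or Gpuer's actual code:

```swift
import SwiftUI

// A single-file menu bar app. MenuBarExtra puts a status item in the
// menu bar; the .window style makes clicking it open a small panel,
// much like the information panel described for Bandwidther.
@main
struct MonitorApp: App {
    var body: some Scene {
        MenuBarExtra("Monitor", systemImage: "gauge") {
            Text("Live stats would render here")
                .padding()
        }
        .menuBarExtraStyle(.window)
    }
}
```

Building without Xcode is likewise a one-liner from the terminal (Command Line Tools assumed): something like `swiftc MonitorApp.swift -o Monitor` produces a runnable binary, though a proper .app bundle with an Info.plist is needed for niceties like hiding the Dock icon.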
These apps were developed quickly, convincing the user that building macOS apps in SwiftUI is a viable new capability for future projects. In the OpenClaw ecosystem, this aligns with the vision of empowering users to leverage local AI agents for automation and tool creation, even without traditional programming backgrounds. The ability to spin up functional apps through minimal prompting and agent collaboration opens doors for personalized workflows and enhanced productivity.
As the OpenClaw platform evolves, integrating such vibe coding techniques could further enhance its plugin ecosystems and agent automation features. By focusing on local-first AI, users can maintain control over their data and tools, while benefiting from the creative and technical assistance of AI agents. This experiment serves as a testament to the transformative potential of combining SwiftUI with intelligent assistants in a secure, open-source environment.