The Brilliance Behind The Claw

Poking around with some of this "claw" stuff today. I'm not going to rant about the security implications or any such controversy surrounding the tech. There are a million other posts that talk about the risks of these sorts of frameworks, not only here, but all over the internet. Nonetheless, if you do decide to explore OpenClaw or any of its now numerous clones or derivatives, take proper precautions. It's not scary if you take the time to understand the risks and account for them.

However, I would like to point out what makes this whole "claw" concept really interesting to me. Take a look at NanoClaw's "Contributing" section (https://github.com/qwibitai/nanoclaw?tab=readme-ov-file#contributing). It's not the typical thing you'd see in an open-source README, but I think it is a perfect example of how our mindset around software development is changing.

If you want to expand on the technology, don't start writing code. Instead, explain to the AI what it is you want to change. Want to add a feature? Describe that feature with clear and concise language. Concerned about security? Explain your concerns and mitigate the risks with the AI at your side.

"That sounds like a security nightmare!"

Well, it certainly could become one. But I'd point out that we humans have conclusively demonstrated that you don't need an AI to create a security nightmare. We're actually pretty darn good at doing that ourselves. This has not prevented us from writing good software.

We put controls in place, perform quality assurance, and do all sorts of things to make sure our potentially flawed, purely human creations are as solid and secure as possible. When security issues are discovered post-release, we address the situation with prompt and appropriate responses. These procedures are well established and finely honed from decades of experience. This is exactly how we should approach potential security issues in AI-generated systems.

Now, do these "claws" address their own security issues like this? No, not really, at least not yet. But these things are just one specimen in a huge and rapidly changing evolutionary tree. New AI-powered technologies are evolving so fast that we can't discount the value of a novel approach to software development based on the problems we see this week, because next week, the situation may be completely different!

I don't think I'm going to be building my own personal AI assistant today, mostly because I don't need one. But I do have some ideas on how these things could be made more "trustable", and I'd like to explore those for a bit. If I come up with something interesting, I'll post more on the subject.