<?xml version="1.0" encoding="iso-8859-1"?>
<rss version="2.0"><channel><title>RasadaCrea rss feeds aggregator</title><link>http://www.rasadacrea.com</link><description>rss feed aggregated news on web services and technologies by RasadaCrea France : Category en_web sites company</description><lastBuildDate>Tue, 28 Apr 2026 15:31:19 GMT</lastBuildDate><generator>PyRSS2Gen-1.0.0</generator><docs>http://blogs.law.harvard.edu/tech/rss</docs><item><title>Hermes Desktop Is a GUI for Hermes Agent</title><link>https://www.hongkiat.com/blog/hermes-desktop-gui-for-hermes-agent/</link><description>A new app called Hermes Desktop makes Hermes Agent easier to use for people who do not want to stay in the terminal. If you have been looking for a cleaner way to install and use an agent on your own machine, it fits into the wider conversation about running AI locally with less setup friction. 
 This is not an official Nous Research desktop app. 
 
 
 
 Hermes Desktop is a separate open-source project created by GitHub user fathah. It sits on top of Hermes Agent and gives it a native interface for setup, chat, and day-to-day management. 
 It is a third-party desktop companion for Hermes Agent, not the upstream project itself. 
 What Is Hermes Desktop? 
 Hermes Desktop is a native app for installing, configuring, and chatting with Hermes Agent without doing everything by hand from the command line. 
 According to the project's GitHub repo, it uses the official Hermes install script, stores Hermes under ~/.hermes, and provides screens for chat, sessions, profiles, memory, skills, tools, schedules, and messaging gateways. 
 That makes it less of a simple wrapper and more of a desktop control panel for Hermes. 
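 The install location is the one concrete detail the repo states. As a minimal sketch, assuming only that Hermes lives under ~/.hermes (the helper names below are hypothetical, not from the project), a front end could check for an existing install before offering setup: 

```python
from pathlib import Path

# Only the ~/.hermes location comes from the Hermes Desktop repo;
# these helper names are hypothetical.
def hermes_install_dir() -> Path:
    """Directory where the install script is said to place Hermes."""
    return Path.home() / ".hermes"

def is_hermes_installed() -> bool:
    """True if a Hermes install directory already exists."""
    return hermes_install_dir().is_dir()

print(is_hermes_installed())
```
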
 What Can the App Do? 
 Based on the repo and release notes, Hermes Desktop already goes well beyond a basic chat .. cntd</description><pubDate>Wed, 22 Apr 2026 10:00:00 GMT</pubDate></item><item><title>OpenClaw vs Hermes Agent: Which One Should You Choose?</title><link>https://www.hongkiat.com/blog/openclaw-vs-hermes-agent/</link><description>The open-source AI agent space got crowded fast in 2026, but two names kept showing up in the same conversations: OpenClaw and Hermes Agent. 
 At first glance, they look like direct rivals. They're both open-source. They both run on your own hardware or a cheap VPS. They both promise a more useful kind of AI assistant than the usual chatbox. 
 But after spending time with both, I don't think the real question is which one kills the other. That framing is lazy. 
 The better question is this: what job do you want the agent to do? 
 Because OpenClaw and Hermes Agent are built around different ideas. 
 OpenClaw feels like a capable runtime for getting things done across apps, channels, and workflows. Hermes feels more like an agent that is trying to become better at being itself. 
 That difference matters. 
 The Short Version 
 If you want a practical assistant that can live in Telegram, WhatsApp, Discord, email, the browser, and your shell, OpenClaw makes a lot of sense. 
 If you want an agent with stronger memory, a built-in self-improvement loop, and a setup that invites experimentation with lots of models, Hermes Agent is the more interesting bet. 
 And if you're deep enough into this space to care about both orchestration and .. cntd</description><pubDate>Sun, 12 Apr 2026 13:00:00 GMT</pubDate></item><item><title>Armin Ronacher: The Center Has a Bias</title><link>https://lucumr.pocoo.org/2026/4/11/the-center-has-a-bias/</link><description>Whenever a new technology shows up, the conversation quickly splits into camps.
There are the people who reject it outright, and there are the people who seem
to adopt it with religious enthusiasm. For more than a year now, no topic has
been more polarising than AI coding agents. 
 What I keep noticing is that a lot of the criticism directed at these tools is
perfectly legitimate, but it often comes from people without a meaningful amount
of direct experience with them. They are not necessarily wrong. In fact, many
of them cite studies, polls and all kinds of sources that themselves spent time
investigating and surveying. And quite legitimately they identified real
issues: the output can be bad, the security implications are scary, the
economics are strange and potentially unsustainable, there is an environmental
impact, the social consequences are unclear, and the hype is exhausting. 
 But there is something important missing from that criticism when it comes from
a position of non-use: it is too abstract. 
 There is a difference between saying "this looks flawed in principle" and saying
"I used this enough to understand where it breaks, where it helps, and how it
changes my work." The second type of criticism is expensive. It costs .. cntd</description><pubDate>Sat, 11 Apr 2026 00:00:00 GMT</pubDate></item><item><title>OpenScreen Is the Free Open-Source Alternative to Screen Studio</title><link>https://www.hongkiat.com/blog/openscreen-screen-studio-alternative/</link><description>If you make product demos, tutorials, walkthroughs, or short social clips, there's a good chance you've looked at Screen Studio before. 
 And for good reason. 
 It's one of the nicest screen recording tools around if you care about presentation. You record your screen, and it handles a lot of the polish for you: cursor-following zooms, smooth motion, clean framing, and a result that looks far better than a raw screen capture usually has any right to. 
 The problem, of course, is that it's not free. 
 If you only make polished demos once in a while, another subscription can feel a bit ridiculous. That's exactly where OpenScreen comes into play. It aims at the same kind of workflow, but it's free, open source, and available across macOS, Windows, and Linux. 
 
 That alone makes it worth a look. 
 What Is OpenScreen? 
 OpenScreen is an open-source desktop app built for turning ordinary screen recordings into cleaner, more watchable demos. It is positioned very clearly as an alternative to Screen Studio, and the overlap is obvious the moment you look at it. 
 You can record your screen or a specific window, then refine the result with zooms, cursor effects, backgrounds, annotations, and timeline-based edits. In other words, it is not just .. cntd</description><pubDate>Fri, 10 Apr 2026 13:00:00 GMT</pubDate></item><item><title>Codex vs Claude Code in 2026: Which Actually Saves You Money?</title><link>https://www.hongkiat.com/blog/codex-vs-claude-code-2026/</link><description>If you have been watching the AI coding tool space, you know the story by now. OpenAI put Codex into ChatGPT. Anthropic shipped Claude Code . Both will write your code, debug your mess, refactor your spaghetti, and run agentic tasks while you grab coffee. 
 But they price differently. And usage limits? That is where it gets interesting, and where a lot of comparisons fall apart. 
 I spent time with both. Here is what I found. 
 Pricing 
 Both start at $20/month. That is where the similarity ends. 
 
 
 
 Plan | OpenAI Codex | Cost | Claude Code | Cost 
 Base | ChatGPT Plus | $20 | Claude Pro | $20 
 Mid | ChatGPT Pro | $200 | Claude Max 5x | $100 
 Heavy | None | &#8211; | Claude Max 20x | $200 
 Teams | ChatGPT Business | ~$25-30/user | Claude Team | Custom 
 
 Prices updated at time of writing. 
 A few things worth noting: 
 
 Codex has no standalone pricing. It is bundled into ChatGPT plans. If you want it, you are on Plus ($20) or Pro ($200). 
 Claude Code has a sweet mid-tier at $100. The Max 5x plan is genuinely compelling for power users who do not want to jump to $200. 
 OpenAI also has a lighter ChatGPT Go tier at around $8/month aimed at casual users, with reduced Codex access. 
 
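 To put the monthly numbers in yearly terms, here is a quick sketch using only the prices listed in the table above (team pricing excluded, since it varies): 

```python
# Monthly prices as listed above (USD); annualize for comparison.
monthly_price = {
    "ChatGPT Plus": 20,
    "ChatGPT Pro": 200,
    "Claude Pro": 20,
    "Claude Max 5x": 100,
    "Claude Max 20x": 200,
}

annual_price = {plan: 12 * price for plan, price in monthly_price.items()}

# The mid-tier gap: Claude Max 5x vs jumping straight to a $200 plan.
print(annual_price["Claude Max 5x"])                                 # 1200
print(annual_price["ChatGPT Pro"] - annual_price["Claude Max 5x"])   # 1200
```
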
 On paper, $20 gets you .. cntd</description><pubDate>Tue, 07 Apr 2026 13:00:00 GMT</pubDate></item><item><title>Graham Dumpleton: Reviewing workshops with AI</title><link>https://grahamdumpleton.me/posts/2026/02/reviewing-workshops-with-ai/</link><description>In my previous post I walked through deploying an AI-generated Educates workshop on a local Kubernetes cluster. The workshop was up and running, accessible through the training portal, and ready to be used. But having a workshop that runs is only the first step. The next question is whether it's actually any good. 
 Workshop review is traditionally a manual process. You open the workshop in a browser, click through each page, read the instructions, run the commands, check that everything works, and make notes on what could be improved. It's time-consuming and somewhat tedious, especially when you're the person who wrote the workshop in the first place and already know what it's supposed to do. Even this task, though, is one where AI can help. 
 Reviewing the source vs the experience 
 One option would be to point Claude at the workshop source files directly. Hand it the Markdown content and the YAML configuration and ask it to review the material. This works to a degree, but it only checks the content in isolation. It doesn't tell you anything about how the workshop actually feels when someone uses it. 
 The real test of a workshop is the experience of navigating it as a learner. How do the instructions read when you're looking at .. cntd</description><pubDate>Sat, 28 Feb 2026 07:39:52 GMT</pubDate></item></channel></rss>