Posts by Tags

AI

Building an AI Team Part 2: Orchestrating sub-agents with the filesystem

less than 1 minute read

In the second part of the series, I dive into the technical coordination of my AI team, specifically using the local filesystem as a shared state mechanism. I explain how my sub-agents (responsible for tasks like research, coding, or review) communicate by reading and writing files in a structured way.
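
As a rough illustration of the pattern described in this excerpt, here is a minimal sketch of a filesystem hand-off between an orchestrator and a sub-agent; the tasks/ and results/ directories and the file-naming scheme are illustrative assumptions, not the exact layout from the post.

```python
# Minimal sketch of filesystem-based hand-off between an orchestrator and a sub-agent.
# The directory layout and file names (tasks/, results/) are illustrative assumptions,
# not the exact scheme from the post.
import json
from pathlib import Path

TASKS = Path("tasks")
RESULTS = Path("results")
TASKS.mkdir(exist_ok=True)
RESULTS.mkdir(exist_ok=True)

def submit_task(task_id: str, role: str, prompt: str) -> Path:
    """The orchestrator writes a task file for a sub-agent to pick up."""
    task_file = TASKS / f"{task_id}.{role}.json"
    task_file.write_text(json.dumps({"id": task_id, "role": role, "prompt": prompt}))
    return task_file

def poll_tasks(role: str):
    """A sub-agent claims any task files addressed to its role."""
    for task_file in sorted(TASKS.glob(f"*.{role}.json")):
        yield json.loads(task_file.read_text()), task_file

def write_result(task: dict, output: str, task_file: Path) -> None:
    """The sub-agent writes its output where the orchestrator expects it."""
    (RESULTS / f"{task['id']}.json").write_text(json.dumps({"id": task["id"], "output": output}))
    task_file.unlink()  # mark the task as consumed

# Example round trip: the orchestrator enqueues, the "research" agent responds.
submit_task("001", "research", "Summarise recent posts about Rancher autoscaling")
for task, path in poll_tasks("research"):
    write_result(task, f"(stub result for: {task['prompt']})", path)
```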

Building an AI Team: My journey from prompts to local agents

less than 1 minute read

This post details my transition from basic LLM prompting to a more structured “AI Team” approach. Frustrated by the limitations of single-prompt interactions, I built a system of specialized local agents using tools like Ollama and Llama 3. My focus is on creating autonomous agents that can handle specific parts of a workflow.
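A minimal sketch of what a "specialist" local agent can look like, assuming an Ollama server running on its default port with a llama3 model pulled; the roles and system prompts below are illustrative, not the post's exact setup.

```python
# Minimal sketch of specialized "team members" backed by a local Ollama server.
# Assumes Ollama is listening on its default port (11434) and a llama3 model has
# been pulled; the system prompts are illustrative assumptions.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(model: str, system: str, prompt: str) -> str:
    """Send one non-streaming generation request to the local Ollama server."""
    payload = json.dumps({
        "model": model,
        "system": system,
        "prompt": prompt,
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Two specialized roles sharing one local model.
def researcher(question: str) -> str:
    return ask("llama3", "You research topics and answer in concise bullet points.", question)

def reviewer(text: str) -> str:
    return ask("llama3", "You review text and point out gaps or errors.", text)

if __name__ == "__main__":
    notes = researcher("What does the Kubernetes Cluster Autoscaler do?")
    print(reviewer(notes))
```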

Agents

Building an AI Team Part 2: Orchestrating sub-agents with the filesystem

less than 1 minute read

In the second part of the series, I dive into the technical coordination of my AI team, specifically using the local filesystem as a shared state mechanism. I explain how my sub-agents (responsible for tasks like research, coding, or review) communicate by reading and writing files in a structured way.

Building an AI Team: My journey from prompts to local agents

less than 1 minute read

This post details my transition from basic LLM prompting to a more structured “AI Team” approach. Frustrated by the limitations of single-prompt interactions, I built a system of specialized local agents using tools like Ollama and Llama 3. My focus is on creating autonomous agents that can handle specific parts of a workflow.

Architecture

Building an AI Team Part 2: Orchestrating sub-agents with the filesystem

less than 1 minute read

In the second part of the series, I dive into the technical coordination of my AI team, specifically using the local filesystem as a shared state mechanism. I explain how my sub-agents (responsible for tasks like research, coding, or review) communicate by reading and writing files in a structured way.

Automation

Building an AI Team: My journey from prompts to local agents

less than 1 minute read

This post details my transition from basic LLM prompting to a more structured “AI Team” approach. Frustrated by the limitations of single-prompt interactions, I built a system of specialized local agents using tools like Ollama and Llama 3. My focus is on creating autonomous agents that can handle specific parts of a workflow.

Harvester

Why I Run Kubernetes on Top of Kubernetes: Rancher + Harvester

less than 1 minute read

I explain the benefits of a “Kubernetes-on-Kubernetes” architecture using Rancher and Harvester. I use Harvester as a bare-metal hyper-converged infrastructure (HCI) solution built on KubeVirt and Longhorn, providing a stable base for my virtualized workloads. Rancher then sits on top to orchestrate and manage multiple downstream Kubernetes clusters.
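As a hedged sketch of the two layers, the snippet below inspects Harvester's KubeVirt VirtualMachines (the bottom layer) and Rancher's downstream cluster objects (the top layer) with kubectl; the kubeconfig context names "harvester" and "rancher" are assumptions for illustration.

```python
# Rough sketch of inspecting both layers of a Kubernetes-on-Kubernetes stack:
# Harvester's KubeVirt VirtualMachines underneath, and Rancher's downstream
# cluster objects on top. The kubeconfig context names are illustrative
# assumptions; the resource names are the upstream KubeVirt/Rancher CRDs.
import subprocess

def kubectl(context: str, *args: str) -> str:
    """Run kubectl against a named kubeconfig context and return its stdout."""
    cmd = ["kubectl", "--context", context, *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

if __name__ == "__main__":
    # Layer 1: Harvester (HCI) - the VMs that downstream cluster nodes run on.
    print(kubectl("harvester", "get", "virtualmachines.kubevirt.io", "-A"))
    # Layer 2: Rancher - the downstream Kubernetes clusters it manages.
    print(kubectl("rancher", "get", "clusters.management.cattle.io"))
```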

Infrastructure

Why I Run Kubernetes on Top of Kubernetes: Rancher + Harvester

less than 1 minute read

I explain the benefits of a “Kubernetes-on-Kubernetes” architecture using Rancher and Harvester. I use Harvester as a bare-metal hyper-converged infrastructure (HCI) solution built on KubeVirt and Longhorn, providing a stable base for my virtualized workloads. Rancher then sits on top to orchestrate and manage multiple downstream Kubernetes clusters.

Kubernetes

Why I Run Kubernetes on Top of Kubernetes: Rancher + Harvester

less than 1 minute read

I explain the benefits of a “Kubernetes-on-Kubernetes” architecture using Rancher and Harvester. I use Harvester as a bare-metal hyper-converged infrastructure (HCI) solution built on KubeVirt and Longhorn, providing a stable base for my virtualized workloads. Rancher then sits on top to orchestrate and manage multiple downstream Kubernetes clusters.

Autoscaling Made Easy with Rancher

less than 1 minute read

In this article, I explore how to implement seamless cluster autoscaling in RKE2 environments using Rancher’s Node Drivers and MachineDeployment resources. I highlight the use of the upstream Kubernetes Cluster Autoscaler (CA) as the core engine, which communicates with Rancher to dynamically provision or terminate nodes based on pod demand.
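For orientation only, here is a conceptual sketch of the scale-up decision described above (pending pods drive a node pool toward its maximum size); this is not the real Cluster Autoscaler or its Rancher integration, just the shape of the logic with made-up pool names and numbers.

```python
# Conceptual sketch of the scale-up decision attributed to the Cluster Autoscaler:
# pending pods -> grow a node pool within its min/max bounds. This is NOT the real
# CA or the Rancher provider; pool names, sizes, and capacity figures are made up.
from dataclasses import dataclass

@dataclass
class NodePool:
    name: str
    current: int
    min_size: int
    max_size: int
    pods_per_node: int  # rough per-node scheduling capacity assumption

def desired_size(pool: NodePool, pending_pods: int) -> int:
    """Return the node count needed to place the pending pods, clamped to the pool's bounds."""
    extra_nodes = -(-pending_pods // pool.pods_per_node)  # ceiling division
    return max(pool.min_size, min(pool.max_size, pool.current + extra_nodes))

if __name__ == "__main__":
    workers = NodePool(name="rke2-workers", current=3, min_size=2, max_size=10, pods_per_node=20)
    print(desired_size(workers, pending_pods=45))  # -> 6, i.e. scale up by 3 nodes
```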

LLM

Building an AI Team: My journey from prompts to local agents

less than 1 minute read

This post details my transition from basic LLM prompting to a more structured “AI Team” approach. Frustrated by the limitations of single-prompt interactions, I built a system of specialized local agents using tools like Ollama and Llama 3. My focus is on creating autonomous agents that can handle specific parts of a workflow.

Platform Engineering

Autoscaling Made Easy with Rancher

less than 1 minute read

In this article, I explore how to implement seamless cluster autoscaling in RKE2 environments using Rancher’s Node Drivers and MachineDeployment resources. I highlight the use of the upstream Kubernetes Cluster Autoscaler (CA) as the core engine, which communicates with Rancher to dynamically provision or terminate nodes based on pod demand.

Rancher

Why I Run Kubernetes on Top of Kubernetes: Rancher + Harvester

less than 1 minute read

I explain the benefits of a “Kubernetes-on-Kubernetes” architecture using Rancher and Harvester. I use Harvester as a bare-metal hyper-converged infrastructure (HCI) solution built on KubeVirt and Longhorn, providing a stable base for my virtualized workloads. Rancher then sits on top to orchestrate and manage multiple downstream Kubernetes clusters.

Autoscaling Made Easy with Rancher

less than 1 minute read

In this article, I explore how to implement seamless cluster autoscaling in RKE2 environments using Rancher’s Node Drivers and MachineDeployment resources. I highlight the use of the upstream Kubernetes Cluster Autoscaler (CA) as the core engine, which communicates with Rancher to dynamically provision or terminate nodes based on pod demand.

Systems Design

Building an AI Team Part 2: Orchestrating sub-agents with the filesystem

less than 1 minute read

In the second part of the series, I dive into the technical coordination of my AI team, specifically using the local filesystem as a shared state mechanism. I explain how my sub-agents (responsible for tasks like research, coding, or review) communicate by reading and writing files in a structured way.