Meet Aixo: a helpful service class to build AI functionality in MODX

Hello all.

There have been a lot of very cool developments with MODX and generative AI recently, so I thought I would share my latest thing.

Aixo is a service class that you can build AI plugins and snippets on top of. One benefit is that its architecture allows you to connect to any provider you want (OpenAI, HuggingFace, etc.) as well as, theoretically, local models.

It works like this. First, you have the main class in aixo/src/Aixo.php:

<?php
namespace MODX\Aixo;

use MODX\Revolution\modX;
use MODX\Aixo\Providers\AixoProviderInterface;

class Aixo {
    /** @var modX */
    protected $modx;
    /** @var array<string, AixoProviderInterface> Loaded provider instances keyed by provider key */
    protected $providers = [];
    /** @var bool Debug mode flag */
    protected $debug = false;

    /**
     * Constructor: initialize Aixo service, load providers, and set debug mode.
     */
    public function __construct(modX $modx) {
        $this->modx = $modx;
        // Load debug setting (defaults to false if not set)
        $this->debug = (bool)$modx->getOption('aixo.debug', null, false);
        // Dynamically load and instantiate all providers
        $this->loadProviders();
    }

    /**
     * Load all provider classes from the providers directory and instantiate them.
     */
    protected function loadProviders(): void {
        $providersDir = __DIR__ . '/Providers';
        if (!is_dir($providersDir)) {
            return;
        }
        // Include all PHP files in the providers directory (except interface or abstract classes)
        foreach (glob($providersDir . '/*.php') as $providerFile) {
            if (strpos($providerFile, 'Interface.php') !== false || strpos($providerFile, 'Abstract') !== false) {
                continue; // skip interface and abstract base class files
            }
            require_once($providerFile);
        }
        // Instantiate each provider class that implements the interface
        foreach (get_declared_classes() as $className) {
            // Only consider classes in the MODX\Aixo\Providers namespace
            if (strpos($className, 'MODX\\Aixo\\Providers\\') === 0) {
                if (in_array(AixoProviderInterface::class, class_implements($className) ?: [])) {
                    /** @var AixoProviderInterface $provider */
                    $provider = new $className($this->modx);
                    // Use the provider's key (identifier) to store it
                    $key = strtolower($provider->getKey());
                    $this->providers[$key] = $provider;
                }
            }
        }
    }

    /**
     * Get a provider by name/key.
     */
    public function getProvider(string $name): ?AixoProviderInterface {
        $key = strtolower($name);
        return $this->providers[$key] ?? null;
    }

    /**
     * Return all loaded providers.
     * @return AixoProviderInterface[]
     */
    public function getProviders(): array {
        return $this->providers;
    }

    /**
     * Process an AI request using a specified or default provider.
     *
     * @param string $prompt    The input prompt/question for the AI.
     * @param string|null $providerName Optional provider key (e.g. "openai"); if null or empty, uses default provider.
     * @param array $options    Additional options (model, temperature, etc.) to override defaults.
     * @return string           The AI-generated response (or an empty string on failure).
     */
    public function process(string $prompt, ?string $providerName = null, array $options = []): string {
        if (trim($prompt) === '') {
            // No prompt provided
            return '';
        }
        // Determine which provider to use
        $providerKey = strtolower(($providerName === null || trim($providerName) === '')
            ? $this->modx->getOption('aixo.default_provider', null, 'openai')
            : $providerName);
        if (!isset($this->providers[$providerKey])) {
            // Provider not found
            $this->modx->log(modX::LOG_LEVEL_ERROR, "[Aixo] Provider '{$providerKey}' is not available (not installed).");
            return '';
        }
        $provider = $this->providers[$providerKey];

        // Check provider availability (e.g. API key configured)
        if (!$provider->isAvailable()) {
            $this->modx->log(modX::LOG_LEVEL_ERROR, "[Aixo] Provider '{$providerKey}' is not configured or available.");
            return '';
        }

        // Merge default model/temperature if not provided in options
        if (empty($options['model'])) {
            $options['model'] = $this->modx->getOption('aixo.default_model', null, '');
        }
        if (empty($options['temperature'])) {
            // Note: the system setting is stored as a string, so cast to float
            $options['temperature'] = (float) $this->modx->getOption('aixo.default_temperature', null, '0.7');
        }

        // Log the request if in debug mode
        if ($this->debug) {
            $this->modx->log(modX::LOG_LEVEL_INFO, "[Aixo] Request to provider '{$providerKey}' with prompt: " . $prompt);
        }

        // Perform the AI request via the provider
        $result = '';
        try {
            $result = (string) $provider->process($prompt, $options);
        } catch (\Exception $e) {
            // Catch any unexpected exception from the provider; prefer the
            // provider's own last error, falling back to the exception message
            $providerError = $provider->getLastError() ?: $e->getMessage();
            $this->modx->log(modX::LOG_LEVEL_ERROR, "[Aixo] Exception in provider '{$providerKey}': " . $providerError);
        }

        // Check for errors reported by provider
        $errorMsg = $provider->getLastError();
        if (!empty($errorMsg)) {
            // Log errors (always log errors, even if not in debug mode)
            $this->modx->log(modX::LOG_LEVEL_ERROR, "[Aixo] Error from provider '{$providerKey}': " . $errorMsg);
            // In debug mode, also note the prompt that caused it (already logged above)
        } else {
            // If no error and in debug mode, log the response
            if ($this->debug) {
                $this->modx->log(modX::LOG_LEVEL_INFO, "[Aixo] Response from '{$providerKey}': " . $result);
            }
        }

        //// WIP: token usage logging ////

        // NOTE: still a work in progress. This assumes the provider exposes the
        // raw API response (e.g. via a getLastResponse() method, which is not
        // yet part of the interface) so the token usage can be read from it.
        $response  = method_exists($provider, 'getLastResponse') ? $provider->getLastResponse() : [];
        $modelName = $options['model'] ?? 'Unknown';
        $metadata  = $options['metadata'] ?? null;

        // Retrieve the token count from the raw API response
        $tokensUsed = 0;
        if (isset($response['usage']['total_tokens'])) {
            // OpenAI-style responses report usage here
            $tokensUsed = (int) $response['usage']['total_tokens'];
        } elseif (isset($response['token_count'])) {
            // Other providers may return the token count in a different format
            $tokensUsed = (int) $response['token_count'];
        }

        // Log the token usage if we have data
        if ($tokensUsed > 0) {
            // Make sure the Aixo package is loaded for xPDO
            $corePath = $this->modx->getOption('core_path') . 'components/aixo/';
            $this->modx->addPackage('aixo', $corePath . 'model/');
            // Create a new log object and set its fields
            $usageLog = $this->modx->newObject('modAixoTokenUsage');
            if ($usageLog) {
                $usageLog->fromArray([
                    'provider'  => $providerKey,
                    'model'     => $modelName,
                    'tokens'    => $tokensUsed,
                    'timestamp' => date('Y-m-d H:i:s'),
                    'metadata'  => $metadata,
                ]);
                $usageLog->save();
            }
        }

        return $result;
    }
}

This does most of the heavy lifting for you. It basically takes your selected Provider class and routes your request through it. The class itself is registered as a service in the MODX DI container, which you can then fetch anywhere with

$aixo = $modx->services->get('aixo');
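
The registration itself happens in the namespace's bootstrap.php. Here is a minimal sketch of what that could look like (assuming plain require_once calls rather than a Composer autoloader - adjust paths to taste):

<?php
// bootstrap.php for the "aixo" namespace - a minimal sketch, not the exact
// file from the package. The file paths here are assumptions.
require_once __DIR__ . '/src/Providers/AixoProviderInterface.php';
require_once __DIR__ . '/src/Aixo.php';

$modx->services->add('aixo', function($c) use ($modx) {
    return new \MODX\Aixo\Aixo($modx);
});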

You can use this new service in any snippet or plugin (or whatever) by just sending a prompt and handling the response:

// Process the prompt through Aixo's AI service
$response = $aixo->process($prompt, $provider, $options);

// Return the AI-generated response (or an empty string on error)
return $response;

Which is pretty simple.

But how do the Provider classes work? Well, they rely on a base interface at aixo/src/Providers/AixoProviderInterface.php, which looks like this:

<?php
namespace MODX\Aixo\Providers;

interface AixoProviderInterface {
    /**
     * A unique provider key (identifier used in settings and code, e.g. "openai").
     */
    public function getKey(): string;

    /**
     * Human-readable provider name (for display, e.g. "OpenAI API").
     */
    public function getName(): string;

    /**
     * Whether this provider is available for use (e.g. proper configuration in place).
     */
    public function isAvailable(): bool;

    /**
     * Process an AI prompt and return the response text.
     * @param string $prompt   The input text/prompt for AI.
     * @param array $options   Options such as model, temperature, etc.
     * @return string          The AI-generated response (empty string on failure).
     */
    public function process(string $prompt, array $options = []): string;

    /**
     * Get a message for the last error (if any) that occurred in process().
     * Returns an empty string if the last operation was successful.
     */
    public function getLastError(): string;
}

You then create an individual setup for each provider. As an example, we could use OpenAI at aixo/src/Providers/OpenAIProvider.php, which would look like this:

<?php
namespace MODX\Aixo\Providers;

use MODX\Revolution\modX;

class OpenAIProvider implements AixoProviderInterface {
    /** @var modX */
    protected $modx;
    /** @var string Last error message, or empty if none */
    protected $lastError = '';

    public function __construct(modX $modx) {
        $this->modx = $modx;
    }

    public function getKey(): string {
        return 'openai';
    }

    public function getName(): string {
        return 'OpenAI API';
    }

    public function isAvailable(): bool {
        // Check that an API key is set and cURL is available
        $apiKey = trim((string)$this->modx->getOption('aixo.api_key_openai', null, ''));
        if (empty($apiKey)) {
            return false; // No API key configured
        }
        if (!function_exists('curl_init')) {
            // cURL PHP extension not available
            return false;
        }
        return true;
    }

    public function getLastError(): string {
        return $this->lastError;
    }

    public function process(string $prompt, array $options = []): string {
        $this->lastError = '';  // Reset error
    
        // Ensure API key is available
        $apiKey = trim((string)$this->modx->getOption('aixo.api_key_openai', null, ''));
        if (empty($apiKey)) {
            $this->lastError = 'Missing OpenAI API key';
            return '';
        }
    
        // Determine model, temperature, and max tokens
        $model = $options['model'] ?? $this->modx->getOption('aixo.default_model', null, 'gpt-3.5-turbo');
        $temperature = $options['temperature'] ?? $this->modx->getOption('aixo.default_temperature', null, '0.7');
        $maxTokens = $options['max_tokens'] ?? $this->modx->getOption('aixo.max_tokens', null, '256');
    
        // Prepare the API request using the chat completions endpoint
        $endpoint = "https://api.openai.com/v1/chat/completions";
        $requestData = [
            'model' => $model,
            'messages' => [
                ['role' => 'system', 'content' => 'You are a helpful assistant.'],
                ['role' => 'user', 'content' => $prompt]
            ],
            'max_tokens'  => intval($maxTokens),
            'temperature' => floatval($temperature)
        ];
    
        // Initialize cURL
        $ch = curl_init($endpoint);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        // Set headers with content type and authorization
        curl_setopt($ch, CURLOPT_HTTPHEADER, [
            "Content-Type: application/json",
            "Authorization: Bearer {$apiKey}"
        ]);
        // Set timeouts (optional)
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 8);
        curl_setopt($ch, CURLOPT_TIMEOUT, 8);
        // Send data as JSON via POST
        $payload = json_encode($requestData);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
    
        // Execute request
        $responseBody = curl_exec($ch);
        if ($responseBody === false) {
            $this->lastError = 'cURL Error: ' . curl_error($ch);
            curl_close($ch);
            return '';
        }
        $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);
    
        // Check for HTTP errors
        if ($httpCode !== 200) {
            $this->lastError = "OpenAI API error (HTTP {$httpCode}): $responseBody";
            return '';
        }
    
        // Decode JSON response
        $resultData = json_decode($responseBody, true);
        if (!$resultData) {
            $this->lastError = 'Invalid JSON response from OpenAI';
            return '';
        }
        if (!empty($resultData['error'])) {
            $this->lastError = "OpenAI Error: " . ($resultData['error']['message'] ?? 'Unknown error');
            return '';
        }
    
        // Extract the generated text from chat completions response
        if (!isset($resultData['choices'][0]['message']['content'])) {
            $this->lastError = 'No completion message found in response';
            return '';
        }
        return $resultData['choices'][0]['message']['content'];
    }
    
}

But if you want to use HuggingFace, their API setup is different and so is their response format. So you would have to set up a different provider file, such as aixo/src/Providers/HuggingFaceProvider.php:

<?php
namespace MODX\Aixo\Providers;

use MODX\Revolution\modX;

class HuggingFaceProvider implements AixoProviderInterface {
    /** @var modX */
    protected $modx;
    /** @var string Last error message */
    protected $lastError = '';

    public function __construct(modX $modx) {
        $this->modx = $modx;
    }

    /**
     * Returns the unique provider key.
     */
    public function getKey(): string {
        return 'huggingface';
    }

    /**
     * Returns the human-readable provider name.
     */
    public function getName(): string {
        return 'HuggingFace API';
    }

    /**
     * Checks if the HuggingFace provider is available (e.g. API key is set).
     */
    public function isAvailable(): bool {
        $apiKey = trim((string)$this->modx->getOption('aixo.api_key_huggingface', null, ''));
        return !empty($apiKey) && function_exists('curl_init');
    }

    /**
     * Processes the AI prompt using the HuggingFace Inference API.
     *
     * @param string $prompt The input text prompt.
     * @param array $options Additional options; expects 'model' to be provided.
     * @return string The generated text (or empty string on failure).
     */
    public function process(string $prompt, array $options = []): string {
        $this->lastError = '';
        // Retrieve API key for HuggingFace from system settings.
        $apiKey = trim((string)$this->modx->getOption('aixo.api_key_huggingface', null, ''));
        if (empty($apiKey)) {
            $this->lastError = 'Missing HuggingFace API key';
            return '';
        }
        
        // Get model name from options or system setting.
        $model = $options['model'] ?? $this->modx->getOption('aixo.default_model_huggingface', null, 'gpt2');
        if (empty($model)) {
            $this->lastError = 'No model specified for HuggingFace';
            return '';
        }
        
        // Build the endpoint URL.
        $endpoint = "https://api-inference.huggingface.co/models/{$model}";
        
        // Prepare the payload.
        $data = [
            'inputs' => $prompt
        ];
        
        $payload = json_encode($data);
        $ch = curl_init($endpoint);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
        curl_setopt($ch, CURLOPT_HTTPHEADER, [
            'Content-Type: application/json',
            "Authorization: Bearer {$apiKey}"
        ]);
        // Optional: set timeouts
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 8);
        curl_setopt($ch, CURLOPT_TIMEOUT, 15);
        
        $responseBody = curl_exec($ch);
        if ($responseBody === false) {
            $this->lastError = 'cURL Error: ' . curl_error($ch);
            curl_close($ch);
            return '';
        }
        $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);
        
        if ($httpCode !== 200) {
            $this->lastError = "HuggingFace API error (HTTP {$httpCode}): " . $responseBody;
            return '';
        }
        
        // Decode the JSON response.
        $resultData = json_decode($responseBody, true);
        if (!$resultData) {
            $this->lastError = 'Invalid JSON response from HuggingFace';
            return '';
        }
        if (isset($resultData['error'])) {
            $this->lastError = "HuggingFace Error: " . $resultData['error'];
            return '';
        }
        
        // Assume the generated text is in the 'generated_text' field,
        // but adjust based on the actual API response structure.
        if (isset($resultData[0]['generated_text'])) {
            return $resultData[0]['generated_text'];
        } elseif (isset($resultData['generated_text'])) {
            return $resultData['generated_text'];
        }
        
        $this->lastError = 'No generated text found in HuggingFace response';
        return '';
    }

    /**
     * Returns the last error message.
     */
    public function getLastError(): string {
        return $this->lastError;
    }
}

You could do the same for Anthropic, Gemini etc.

Let’s assume we stay with OpenAI. All you now need to do is set a couple of System Settings:

aixo.api_key_openai : YOUR_API_KEY
aixo.debug : YES/NO // Verbose debug mode including number of tokens etc.
aixo.default_model : gpt-4o // Set this to whatever works for your Provider
aixo.default_provider : openai // You can overwrite this every time if you have multiple Providers
aixo.default_temperature : 0.6 // Default value, again can be overwritten at runtime
aixo.max_tokens : 16384 // I mean, sure, why not?

So now you are ready to build amazing AI things on top of this Multi-Provider LLM wrapper. Here is an example of a simple snippet which just takes a raw prompt and spits back the response:

<?php
/**
 * Aixo snippet: Call the Aixo AI service and return its response.
 *
 * This snippet accepts a prompt as either plain text or as a chunk reference.
 * For example:
 *   Plain text: [[!Aixo? &prompt=`What is the capital of Guatemala?`]]
 *   Chunk reference: [[!Aixo? &prompt=`[[$question]]`]]
 *
 * When a chunk reference is detected (by checking if the prompt starts with "[[$"
 * and ends with "]]"), the snippet will call $modx->getChunk() to render the chunk,
 * then pass its content to the Aixo service.
 *
 * Additional parameters (like &provider, &model, &temperature) override the global defaults.
 */

// Retrieve the prompt property
$prompt = $modx->getOption('prompt', $scriptProperties, '');
if (empty($prompt)) {
    return ''; // No prompt provided.
}

// Check if the prompt is a chunk reference in the format [[$chunkName]]
if (substr($prompt, 0, 3) === '[[$' && substr($prompt, -2) === ']]') {
    // Extract the chunk name (remove the leading [[$ and trailing ]])
    $chunkName = substr($prompt, 3, -2);
    // Render the chunk content
    $prompt = $modx->getChunk($chunkName);
    if (empty($prompt)) {
        return ''; // If the chunk is empty, return nothing.
    }
}

// Determine the provider to use (or fallback to the system default)
$provider = $modx->getOption(
    'provider',
    $scriptProperties,
    $modx->getOption('aixo.default_provider', null, 'openai')
);

// Gather additional options, e.g. model and temperature overrides
$options = [];
$model = $modx->getOption('model', $scriptProperties, '');
if (!empty($model)) {
    $options['model'] = $model;
}
$temperature = $modx->getOption('temperature', $scriptProperties, '');
if ($temperature !== '') {
    $options['temperature'] = is_numeric($temperature) ? (float)$temperature : $temperature;
}

// Retrieve the Aixo service from MODX's DI container
/** @var \MODX\Aixo\Aixo $aixo */
$aixo = $modx->services->has('aixo') ? $modx->services->get('aixo') : null;
if (!$aixo) {
    $modx->log(\MODX\Revolution\modX::LOG_LEVEL_ERROR, '[Aixo] Aixo service is not available.');
    return '';
}

// Process the prompt through Aixo's AI service
$response = $aixo->process($prompt, $provider, $options);

// Return the AI-generated response (or an empty string on error)
return $response;

Slightly dirty, but it does the job.

But wait - there’s more!

I also created two widgets: one for Status (to see which Providers you have set up correctly, which model you are using by default, etc.) and one for Usage, which gives the cumulative token usage for all your AI calls.

core/components/aixo/elements/widgets/widget.aixo-status.php

<?php
/**
 * Aixo Status Dashboard Widget
 * Displays the default configuration and availability of AI providers.
 */

/** @var modX $modx */
$output = '';
// Attempt to get Aixo service
$aixo = $modx->services->has('aixo') ? $modx->services->get('aixo') : null;
if (!$aixo) {
    // If Aixo service isn't available, perhaps the extra isn't installed correctly
    $output .= '<p style="color:red;"><strong>Aixo service is not initialized.</strong></p>';
    return $output;
}

// Get system settings
$defaultProvider = $modx->getOption('aixo.default_provider', null, '(none)');
$defaultModel    = $modx->getOption('aixo.default_model', null, '');
$defaultTemp     = $modx->getOption('aixo.default_temperature', null, '');
$debugMode       = (bool) $modx->getOption('aixo.debug', null, false);

// Start building HTML output
$output .= '<h3>Aixo Configuration</h3>';
$output .= '<p><strong>Default Provider:</strong> ' . htmlspecialchars($defaultProvider) . '</p>';
$output .= '<p><strong>Default Model:</strong> ' . htmlspecialchars($defaultModel) . '</p>';
$output .= '<p><strong>Default Temperature:</strong> ' . htmlspecialchars($defaultTemp) . '</p>';
$output .= '<p><strong>Debug Mode:</strong> ' . ($debugMode ? 'On' : 'Off') . '</p>';

// List available providers and their status
$output .= '<h4>Available Providers:</h4><ul>';
$providers = $aixo->getProviders();
if (!empty($providers)) {
    /** @var MODX\Aixo\Providers\AixoProviderInterface $prov */
    foreach ($providers as $key => $prov) {
        $name = $prov->getName();
        $status = $prov->isAvailable() ? '✅ Ready' : '⚠️ Not Configured';
        // If this provider is the default, mark it
        $mark = ($key === strtolower($defaultProvider)) ? ' (default)' : '';
        $output .= '<li><strong>' . htmlspecialchars($name) . ":</strong> {$status}{$mark}</li>";
    }
} else {
    $output .= '<li>No providers loaded.</li>';
}
$output .= '</ul>';

// You can include additional info if needed, e.g., last run status or version.
return $output;

core/components/aixo/elements/widgets/widget.aixo-usage.php

<?php
/** @var modX $modx */
$modx->addPackage('aixo', $modx->getOption('core_path').'components/aixo/model/');

// Get the most recent usage log
$c = $modx->newQuery('modAixoTokenUsage');
$c->sortby('timestamp', 'DESC');
$c->limit(1);
$lastEntry = $modx->getObject('modAixoTokenUsage', $c);

if ($lastEntry) {
    $lastProvider = $lastEntry->get('provider');
    $lastModel    = $lastEntry->get('model');
    $lastTokens   = $lastEntry->get('tokens');
    $lastTime     = $lastEntry->get('timestamp');
    $lastInfo = "Last Request: {$lastProvider} (model {$lastModel}) used {$lastTokens} tokens at {$lastTime}.";
} else {
    $lastInfo = "Last Request: (no data yet)";
}

// Aggregate total tokens per provider+model
$statsList = [];
$q = $modx->newQuery('modAixoTokenUsage');
$q->select([
    'provider',
    'model',
    'SUM(`tokens`) AS total_tokens',
]);
$q->groupby('provider');
$q->groupby('model');
if ($q->prepare() && $q->stmt->execute()) {
    $rows = $q->stmt->fetchAll(PDO::FETCH_ASSOC);
    foreach ($rows as $row) {
        $prov = $row['provider'] ?: 'Unknown';
        $mod  = $row['model'] ?: 'Unknown';
        $total = (int) $row['total_tokens'];
        $statsList[] = "{$prov} (model {$mod}): {$total} tokens";
    }
}

// Build HTML output
$output = "<div class='aixo-token-stats'>";
$output .= "<p><strong>{$lastInfo}</strong></p>";
if (!empty($statsList)) {
    $output .= "<h4>Total Tokens Used (by Provider/Model):</h4><ul>";
    foreach ($statsList as $line) {
        $output .= "<li>{$line}</li>";
    }
    $output .= "</ul>";
}
$output .= "</div>";

return $output;
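
For reference: both the token-usage logging and this widget assume a small custom xPDO model, modAixoTokenUsage. The exact schema isn't shown here, but a sketch of what it could look like (field names matching the code above):

<?xml version="1.0" encoding="UTF-8"?>
<model package="aixo" baseClass="xPDO\Om\xPDOObject" platform="mysql" defaultEngine="InnoDB" version="1.1">
    <object class="modAixoTokenUsage" table="aixo_token_usage" extends="xPDO\Om\xPDOSimpleObject">
        <field key="provider"  dbtype="varchar"  precision="50"  phptype="string"   null="false" default=""/>
        <field key="model"     dbtype="varchar"  precision="100" phptype="string"   null="false" default=""/>
        <field key="tokens"    dbtype="int"      precision="10"  phptype="integer"  null="false" default="0" attributes="unsigned"/>
        <field key="timestamp" dbtype="datetime"                 phptype="datetime" null="true"/>
        <field key="metadata"  dbtype="text"                     phptype="string"   null="true"/>
    </object>
</model>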

Hopefully, with these tools at your disposal, you can build some cool AI stuff in MODX.

Also, if anyone feels like this might make a good Extra to be installable via the Package Manager, let me know - as I have tried and failed to get this to build properly. Any assistance appreciated. Thx, and enjoy.


Just to crosslink to other AI wrappers released this week:

… and there’s more :smile:

Some 25 people just spent this weekend in the Swiss Alps talking about AI and different ways to use it, so there’s definitely a lot coming out.

As a small code suggestion, I would recommend staying away from using the MODX root namespace. Give it your own root namespace.
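
For example, claim a vendor namespace of your own (the name below is just an illustration):

<?php
// A vendor namespace of your own, instead of namespace MODX\Aixo;
namespace DanielWesterlund\Aixo;

use MODX\Revolution\modX;

class Aixo { /* ... */ }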


Most Excellent!!!


Thanks @markh. It would be great if we could start a list somewhere of all the AI initiatives/Extras that are being built for MODX.

I was sadly not able to make it to SnowUp this year, but I heard it was a great event - awesome work by @henk_heibel and @achterbahn! I created a little demo video for SnowUp to show the AI localisation plugin LocalAIze: https://vimeo.com/1061692749

Good call about the root namespace. Thanks.

I was hoping, however, that something like this, i.e. a simple, model-agnostic LLM wrapper with transparent usage which exposes a flexible MODX service, would find its way into the MODX core, so that everyone who wants to build something to do with AI could build on top of the same foundation.

I understand that this is something that modAI is partly trying to do - but I like the idea of separating the service from the actual functionality. So while modAI is great for generating and refining content within Resources - how easy would it be to bend it into a coding copilot for snippets and plugins? Would it be possible to build it out into more of an AI agent-type workflow?

What are your thoughts on this?

Ahh I didn’t realise that was you but we did indeed watch that! Was cool to see :smiley:

Thanks for being part of SnowUp that way even if you couldn’t physically be here.

I fully agree that’s something we need… that is exactly why I started working on AIKit. With so many cool AI implementations popping up, having them all be independent means lots of duplicate work on basic things (calling an API is not the hard part of AI), and inevitably it’ll become a mess where some features are in one, but not the other implementation, they all use different system prompts, they can’t talk to each other… etc.

Having that one foundation that everything can build on and that we as community collaborate on is so important.

(I don’t think it necessarily has to be in the core; AI moves so quickly these days that I think tying it to the core release cycle is going to hold everyone back.)

I’m obviously biased, but AIKit being built around the assistant, which allows refining, function calling, and since yesterday vector storage too (initially via Pinecone, but with adapters), feels like the right direction.

modAI looks cool and has some things AIKit or others don’t (image generation is one), but it’s all based on one-shot prompting, with no function calling, so from what I can tell there’s no refining or natural conversation to do things. It can be made to do that, I’m sure, but then we’re again duplicating lots of effort.

It only took me like 10 minutes to add one-shot prompts to AIKit that mimic what your snippet does and expose the model as a service. And those one-shot prompts are still going to use the knowledge and actions available to the assistant.

The next major thing I’m working on is the JS-API that lets any extra trigger the assistant with a prompt. So when you have things like modAI looking to generate content for fields, or modxai1y to generate alt tags for an image, or any other case where you want to generate/parse something - that would be able to do all of that with just a couple lines of code. That will get a callback when the user is happy with the result and clicks a button in the assistant.

One-click prompt → response interactions would also be as easy as calling the prompt method.

Obviously there are lots of other features still to build, like image generation/vision and more functions to perform actions in the manager - your dashboard widgets look cool too (and it’s already tracking tokens per conversation). But yeah, hopefully I’ll get some help building all that and can co

…and sorry for taking over your thread, I should probably create my own thread about AIKit and talk about it there instead :laughing:


Hahah - no worries about taking over the thread - I think it’s an important topic. I really like the approach you seem to be taking with AIKit, i.e. creating a general model or vector DB interface and then allowing people to just create new adapters on top of that for OpenAI, Anthropic, Gemini etc. I think this is definitely the way to go.

I guess my slight point of contention would be including a bunch of tools in the AIKit Extra (stuff like CreateResource etc.). My suggestion here would be to separate this functionality into a different Extra so that people can build whatever tool THEY want on top without ever having to touch the AIKit code itself.

In a way, I see LLM model interfaces becoming something like xPDO in MODX: a flexible, extensible bridge that queries a model in a certain way and handles the response in a certain way, but that can also maintain context and is suitable for building complex prompt chains, RAG pipelines, agents etc. on top of.

Also I love the fact that AIKit is enabling embeddings. Did you also look into using Chroma instead of Pinecone? I like the idea of using Open Source DBs as much as possible. What are your thoughts? I also assume a different model would be needed for the embedding transformations, right?

In my opinion, AI in MODX can mean more than an LLM wrapper. It should be a fundamental, deeply integrated feature concept that goes through the entire core of the CMS. I am excited to hear the ideas and vision of others here.

100%. And already possible; a package just needs an autoloader (called, for example, in a namespace bootstrap.php) and to register the tools during installation.

Something like CreateResource is fine in the core package IMO as it’s a core feature, but I am already plotting for example a Mailchimp integration that would allow creating a new draft newsletter that would be a standalone integration. And regardless of core or third party, the idea is that you can use the configuration interface at some point to configure different tools and turn them on/off as desired.

Yess!!

I have not looked into Chroma yet (besides a 5-min search just now), heck I didn’t know much about Pinecone until a few days ago and only got that working thanks to help from people, but totally. It just needs a simple interface implementation and changing the system setting.

Pinecone did have the option to take care of the embeddings so I took advantage of that to keep the interface as simple as possible. Totally open to adding in an embedding model utility for OpenAI or other models though.

So many cool things going on on the AI front, indeed! I would love to sync up on things as well so MODX becomes the ultimate AI-first platform for making amazing sites. So cool to see what you’ve done @markh and @digitalime and can’t wait to see the videos from the SnowUp.


OK. So I messed around a bit and managed to make the OpenAI Provider multimodal (I also renamed the snippet to AixoGen to avoid confusion). Aixo is the class, and AixoGen is the snippet built on top of it.
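
Under the hood, the provider now branches on a &task option. Roughly like this - a simplified sketch of the image branch, not the exact code (that lives in the repo linked below):

// Inside OpenAIProvider::process() - simplified sketch.
if (($options['task'] ?? 'text') === 'image') {
    $endpoint = 'https://api.openai.com/v1/images/generations';
    $requestData = [
        'model'  => $options['model'] ?? 'dall-e-3',
        'prompt' => $prompt,
        'n'      => 1,
    ];
    // ...then the same cURL flow as the chat branch; on success the generated
    // image URL is at $resultData['data'][0]['url'], which the snippet returns
    // so it can be dropped straight into an <img src=""> attribute.
}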

The following snippet calls:

<p>[[$question]]<br> 
<em>[[!AixoGen? &prompt=`[[$question]]`]]</em></p>


<p>A cute cat playing Handball<br> 
<img style="width:300px; height:auto" src="[[!AixoGen? &prompt=`A cute cat playing Handball` &task=`image` &model=`dall-e-3`]]"></p>

Gives the following output:

And it works when you have an image input as well (useful for captions)

Image: <img src="https://upload.wikimedia.org/wikipedia/commons/7/7e/Blue_Ghost_Mission_1_rendering.jpg">
<br><small>https://upload.wikimedia.org/wikipedia/commons/7/7e/Blue_Ghost_Mission_1_rendering.jpg</small>
<p>Caption:<em>[[!AixoGen? &prompt=`please write a caption for this image: https://upload.wikimedia.org/wikipedia/commons/7/7e/Blue_Ghost_Mission_1_rendering.jpg`]]</em></p>

Gives us this:

I also managed to make the HuggingFace provider kinda work (maybe) - which leads to fun errors like this:

[Aixo] Error from provider 'huggingface': HuggingFace API error (HTTP 403): {"error":"The model deepseek-ai/DeepSeek-R1 is too large to be loaded automatically (688GB > 10GB)."}

Since downloading huge models locally is always going to be a challenge - this will remain a work in progress.

To avoid filling this thread with code, I have put everything in a repo: GitHub - danielwesterlund/aixo: Aixo is an AI helper class for MODX

I would love some help in getting this built into a transport package, as I have not been able to successfully do that yet.

@digitalime Just a comment on the general implementation, especially when it comes to front-facing snippets: I’d be cautious around using PHP as the request mechanism for AI. As with relying on any external service on the frontend, you want a heavy slathering of low timeouts and caching. Long-running requests can kill a site or server very fast if they get even a small spike in traffic.

Thanks @matdave! Can confirm that this has been an issue even in local development. I managed to hack past it by just upping my cURL timeouts - but I know this is not a particularly good solution when we scale. What approach would you suggest?

I would also be interested in getting more ideas about how to make the requests more economical. Is there a way to use the native MODX cache mechanism to persist responses we don’t particularly want to write to the DB, but still don’t want to send as an expensive API request every time?
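
Something along these lines is what I am imagining - an untested sketch, where the 'aixo' cache partition and the one-hour lifetime are made up:

// Untested sketch: wrap the Aixo call in the native MODX cache manager.
$cacheOptions = [\xPDO\xPDO::OPT_CACHE_KEY => 'aixo'];
$cacheKey = 'responses/' . md5($provider . '|' . $prompt . '|' . serialize($options));

$response = $modx->cacheManager->get($cacheKey, $cacheOptions);
if (empty($response)) {
    $response = $aixo->process($prompt, $provider, $options);
    if ($response !== '') {
        // Keep the response for an hour so repeat page hits skip the API call
        $modx->cacheManager->set($cacheKey, $response, 3600, $cacheOptions);
    }
}
return $response;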