This update is a broad maintenance and feature pass across several parts of the bot: admin tooling, user commands, public help, and OpenAI/tellme configuration.
The goal was not only to fix bugs, but also to make the bot easier to operate from IRC, safer with sensitive data, and more pleasant to use day to day.
The OpenAI/tellme integration now has a much stronger Owner-only administration layer.
The new openai command can show, explain, test, and update safe runtime settings without editing Perl code.
Examples:
m openai help
m openai status
m openai defaults
m openai explain model
m openai explain system_prompt
m openai set model gpt-4o-mini
m openai set temperature 0.6
m openai set max_tokens 700
m openai set max_privmsg 5
m openai set wrap_bytes 360
m openai set sleep_us 500000
m openai reset model
The API key is intentionally not editable from IRC. That is by design: secrets typed on IRC can end up in logs, bouncers, debug consoles, or client history.
The command reports whether the API key exists, but never prints the key itself.
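The "exists but is never printed" reporting can be sketched as a tiny helper (Python for illustration; the bot itself is Perl, and `describe_api_key` is a hypothetical name):

```python
def describe_api_key(key):
    # Report presence and length of the secret without ever
    # including the key material itself in the notice.
    if key:
        return f"api_key: set (len={len(key)})"
    return "api_key: not set"
```

Only the length leaks, which is enough for an Owner to spot an empty or truncated key without exposing it to logs or client history.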
The tellme/chatGPT behavior is now configurable from the [openai] section of mediabot.conf.
The sample configuration documents the important values:
API_URL=https://api.openai.com/v1/chat/completions
MODEL=gpt-4o-mini
FALLBACK_MODEL=
SYSTEM_PROMPT=...
TEMPERATURE=0.7
MAX_TOKENS=400
MAX_PRIVMSG=4
WRAP_BYTES=400
SLEEP_US=750000
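The "configured value or built-in default" lookup can be illustrated with a short Python sketch (the real bot reads mediabot.conf from Perl; `load_openai_conf` and the defaults shown are assumptions based on the sample above):

```python
import configparser

# Hypothetical defaults, mirroring the sample configuration values.
DEFAULTS = {
    "MODEL": "gpt-4o-mini",
    "TEMPERATURE": "0.7",
    "MAX_TOKENS": "400",
    "MAX_PRIVMSG": "4",
}

def load_openai_conf(text):
    # mediabot.conf is INI-like, so configparser can stand in for
    # the bot's own parser: take each value from [openai] if present,
    # otherwise fall back to the built-in default.
    cp = configparser.ConfigParser()
    cp.read_string(text)
    return {k: cp.get("openai", k, fallback=v) for k, v in DEFAULTS.items()}
```

Anything not set in the file simply keeps its default, so a minimal [openai] section is enough to get started.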
This makes it possible to tune the bot without patching Perl.
The bot can now use a fallback OpenAI model when the primary model is unavailable or forbidden.
Example:
m openai set model gpt-5
m openai set fallback_model gpt-4o-mini
If the primary model returns HTTP 400, 403, or 404, tellme retries once with the configured fallback model.
This is especially useful when testing models that may or may not be available to the current OpenAI API project.
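The retry-once-with-fallback logic can be sketched like this (Python for illustration; `chat_with_fallback` and the transport callback are hypothetical names, and the retryable status set follows the codes listed above):

```python
# HTTP statuses that trigger a single retry on the fallback model.
RETRYABLE = {400, 403, 404}

def chat_with_fallback(call, model, fallback_model):
    # call(model) -> (status, text): a hypothetical transport wrapper
    # around the chat completions request.
    status, text = call(model)
    if status in RETRYABLE and fallback_model:
        # Retry exactly once, on the configured fallback model.
        return call(fallback_model)
    return status, text
```

With no fallback configured, the primary model's error is returned as-is; other statuses (e.g. 429, 500) are deliberately not retried in this sketch.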
The openai test command gives an Owner a quick diagnostic tool from IRC.
Examples:
m openai test
m openai ping
m openai test Reply with exactly OK
It reports the outcome of the request, and if the primary model fails while a fallback model is configured, the test command also tries the fallback and reports whether it was used.
The new openai models command lists models visible to the configured API key.
Examples:
m openai models
m openai models gpt
m openai models gpt-5
m openai models gpt-4o
This makes model debugging much easier. Instead of guessing whether a model is available, the bot can ask the API project directly and show a filtered list in IRC notices.
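The optional filter argument can be sketched as a case-insensitive substring match over the model ids returned by the API (an assumption about the exact matching rule):

```python
def filter_models(model_ids, pattern=None):
    # model_ids: ids already fetched from the API's model listing.
    # With no pattern, return everything; otherwise keep ids that
    # contain the pattern, ignoring case, and sort for stable output.
    if not pattern:
        return sorted(model_ids)
    p = pattern.lower()
    return sorted(m for m in model_ids if p in m.lower())
```

So `openai models gpt` narrows the list to gpt-prefixed ids, while `openai models gpt-5` checks one family specifically.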
Owner users can now apply predefined OpenAI/tellme profiles:
m openai profiles
m openai profile dev
m openai profile compact
m openai profile safe
m openai profile default
Profiles provide quick presets:
dev : richer development answers
compact : shorter answers and less IRC noise
safe : conservative public-channel output
default : built-in defaults
This avoids typing several openai set ... commands every time the bot needs to switch between development, public-channel, or compact output behavior.
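A profile is essentially a named bundle of `openai set` values applied in one step. A minimal sketch, with invented preset numbers (the real values live in the bot's code):

```python
# Hypothetical preset values for illustration only.
PROFILES = {
    "dev":     {"max_tokens": 700, "max_privmsg": 5, "temperature": 0.7},
    "compact": {"max_tokens": 250, "max_privmsg": 2, "temperature": 0.5},
    "safe":    {"max_tokens": 300, "max_privmsg": 3, "temperature": 0.3},
}

def apply_profile(settings, name, profiles=PROFILES):
    # Overlay the preset on the current settings without mutating
    # the original dict; unrelated keys (model, prompt) are kept.
    merged = dict(settings)
    merged.update(profiles[name])
    return merged
```

Keys the profile does not mention, such as the model or the system prompt, stay untouched.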
The system prompt is now configurable:
m openai explain system_prompt
m openai set system_prompt You answer briefly, clearly, and safely for IRC.
m openai reset system_prompt
The status command does not print the full prompt. It only reports system_prompt_len, so long prompts do not spam notices and sensitive or experimental prompt text is not dumped casually.
The internal help system was improved with search and level filters.
New commands:
m help search radio
m help search channel
m help search timer
m help level public
m help level admin
m help level owner
m help level master
This makes help much more usable as the command list grows.
The internal help parser was also made duplicate-safe, so obsolete duplicate command rows no longer silently override newer help entries.
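Both features can be sketched over a simple in-memory help table (Python for illustration; the dedupe policy shown, first row wins, is one possible reading of "duplicate-safe"):

```python
def load_help(rows):
    # rows: (command, level, text) tuples as read from the database.
    # setdefault keeps the first row per command, so duplicate rows
    # further down cannot silently override the kept entry.
    help_map = {}
    for cmd, level, text in rows:
        help_map.setdefault(cmd, (level, text))
    return help_map

def help_search(help_map, needle):
    # Case-insensitive match against command names and help text.
    needle = needle.lower()
    return sorted(c for c, (_, t) in help_map.items()
                  if needle in c.lower() or needle in t.lower())

def help_level(help_map, level):
    # Only commands registered at exactly this level.
    return sorted(c for c, (lvl, _) in help_map.items() if lvl == level)
```

So `help search radio` scans names and descriptions, while `help level owner` slices the list by privilege.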
The userinfo command was hardened.
It still reports whether a user has a password set, but it no longer selects or loads the password value itself from the database.
Instead of selecting:
USER.password
it now selects a boolean:
CASE
WHEN USER.password IS NOT NULL AND USER.password <> '' THEN 1
ELSE 0
END AS has_password
The IRC behavior remains useful, but the code handles less sensitive data.
That is a cleaner and safer design.
The birthday next command now uses a real rolling 30-day window.
Previously, it relied on a rough MM-DD string comparison, which could behave badly around year boundaries and did not truly enforce the next 30 days.
It now calculates real days ahead, handles year wrap properly, sorts by upcoming date, and reports output like:
Upcoming birthdays in the next 30 days:
Gwen : 05-20 (in 13d)
Teuk : 05-27 (in 20d)
Birthday date validation was also improved, so impossible dates such as 31/02 are rejected.
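The rolling-window logic can be sketched in Python (the bot is Perl; the function and data shapes here are assumptions). Validating against year 2000, a leap year, rejects only truly impossible dates such as 31/02 while still accepting 29/02:

```python
from datetime import date

def upcoming(today, birthdays, window=30):
    # birthdays: {nick: (month, day)}. Returns [(nick, "MM-DD", days)]
    # sorted by how soon the birthday falls, within the window.
    out = []
    for nick, (m, d) in birthdays.items():
        date(2000, m, d)  # leap year: rejects only impossible dates
        target = None
        for year in (today.year, today.year + 1):  # handle year wrap
            try:
                t = date(year, m, d)
            except ValueError:
                continue  # e.g. Feb 29 in a non-leap year
            if t >= today:
                target = t
                break
        if target is None:
            continue
        days = (target - today).days
        if days <= window:
            out.append((nick, f"{m:02d}-{d:02d}", days))
    out.sort(key=lambda x: x[2])
    return out
```

A December birthday queried in late December correctly resolves to the following year instead of falling off a string comparison.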
The seen command now respects the optional channel argument more consistently.
When using:
m seen Nick #channel
the online shortcut is scoped to that channel.
Before this change, the bot could report that a nick was currently online on another channel even though a specific channel was requested.
Now the behavior is more coherent:
m seen Teuk
m seen Teuk #teuk
m seen Teuk #boulets
The command also reports the actual online nick casing instead of always using the lower-cased lookup key.
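The scoped, casing-preserving lookup can be sketched as follows (Python for illustration; the `online` data shape is an assumption):

```python
def seen_online(online, nick, channel=None):
    # online: {channel: [nicks currently present]}.
    # Lookup is case-insensitive, but the reply keeps the nick's
    # actual casing instead of the lower-cased lookup key.
    key = nick.lower()
    channels = [channel] if channel else list(online)
    for chan in channels:
        for n in online.get(chan, []):
            if n.lower() == key:
                return f"{n} is currently on {chan}"
    return None  # fall back to the last-seen database record
```

With a channel argument, only that channel is scanned, so a nick active elsewhere no longer short-circuits the scoped query.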
This pass added or updated regression coverage around userinfo and seen.

This update makes the bot feel less like a fragile script and more like an operable IRC service.
The admin side gains better diagnostics and live-safe configuration.
The public/user side gains better help, safer user info, more accurate birthday handling, and more coherent seen behavior.
In short: fewer dark corners, fewer cursed edge cases, and a few more useful spells in the book.
Verification (syntax checks and targeted regression tests):
cd /home/mediabot/mediabot_v3 || exit 1
find Mediabot -name '*.pm' -print0 | xargs -0 -n1 runuser -u mediabot -- perl -I. -c
runuser -u mediabot -- perl -c mediabot.pl
runuser -u mediabot -- perl t/test_commands.pl --filter '179|180|181|182|183|184|185|186|187|188|189|190'
Manual smoke tests from IRC:
m openai help
m openai status
m openai test
m openai models gpt
m openai profiles
m openai profile compact
m help search channel
m help level owner
m birthday next
m seen Te[u]K #teuk
m userinfo teuk
Expected behavior: