This document provides a comprehensive reference of all functions in the nfo project, organized by module and category.
### `@log_call`

Automatically logs function calls with arguments, return values, exceptions, and duration.

```python
@log_call
def my_function(arg1, arg2):
    return arg1 + arg2
```

Parameters:

- `func` (Optional[Callable]) - Function to decorate
- `level` (str) - Log level (default: "DEBUG")
- `logger` (Optional[Logger]) - Custom logger instance
- `max_repr_length` (Optional[int]) - Truncate long representations

Returns: Decorated function
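For intuition, here is a hedged sketch of what a `@log_call`-style decorator does, built only on the stdlib `logging` module. The `log_call_sketch` name and its internals are illustrative assumptions, not nfo's actual implementation (which constructs a `LogEntry` and dispatches it to sinks):

```python
import functools
import logging
import time

logger = logging.getLogger("sketch")

def log_call_sketch(func):
    """Illustrative stand-in for nfo's @log_call: log args, result,
    exceptions, and duration around each call."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = func(*args, **kwargs)
        except Exception as exc:
            # Log the failure, then re-raise (unlike @catch below).
            logger.error("%s raised %r", func.__qualname__, exc)
            raise
        duration_ms = (time.perf_counter() - start) * 1000
        logger.debug("%s(args=%r, kwargs=%r) -> %r in %.2f ms",
                     func.__name__, args, kwargs, result, duration_ms)
        return result
    return wrapper

@log_call_sketch
def my_function(arg1, arg2):
    return arg1 + arg2
```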
### `@catch`

Like `@log_call` but suppresses exceptions and returns a default value.

```python
@catch(default=None)
def risky_function():
    return 1 / 0  # Returns None instead of raising
```

Parameters:

- `func` (Optional[Callable]) - Function to decorate
- `default` (Any) - Value to return on exception (default: None)
- `level` (str) - Log level (default: "ERROR")
- `logger` (Optional[Logger]) - Custom logger instance
- `max_repr_length` (Optional[int]) - Truncate long representations

Returns: Decorated function
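The exception-suppressing behavior can be sketched in a few lines. `catch_sketch` is a hypothetical minimal version, not nfo's real decorator:

```python
import functools
import logging

logger = logging.getLogger("sketch")

def catch_sketch(default=None):
    """Illustrative stand-in for nfo's @catch: log the exception
    and return `default` instead of raising."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception as exc:
                logger.error("%s raised %r; returning %r",
                             func.__name__, exc, default)
                return default
        return wrapper
    return decorator

@catch_sketch(default=None)
def risky_function():
    return 1 / 0
```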
### `@logged`

Class decorator that automatically wraps all public methods with `@log_call`.

```python
@logged
class MyService:
    def method1(self): pass   # Will be logged
    def _private(self): pass  # Won't be logged
```

Parameters:

- `cls` (Optional[Type]) - Class to decorate
- `level` (str) - Log level for all methods
- `logger` (Optional[Logger]) - Custom logger instance
- `max_repr_length` (Optional[int]) - Truncate long representations

Returns: Decorated class
### `@skip`

Mark a public method to be excluded from `@logged` auto-wrapping.

```python
@logged
class MyService:
    @skip
    def health_check(self): pass  # Excluded from logging
```

### `auto_log()`

Automatically wrap all functions in specified modules with logging decorators.

```python
import mymodule

auto_log(mymodule, level="INFO", catch_exceptions=True)
```

Parameters:

- `*modules` - Module objects to instrument
- `level` (str) - Log level for all functions
- `catch_exceptions` (bool) - Use `@catch` instead of `@log_call`
- `default` (Any) - Default value for `@catch`
- `include_private` (bool) - Also wrap private functions
- `max_repr_length` (Optional[int]) - Truncate long representations

Returns: Number of functions patched
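Module instrumentation of this kind is typically done by iterating a module's namespace and re-binding each function to a wrapped version. The sketch below assumes an `inspect`-based approach (nfo's internals may differ) and uses a trivial call-counting wrapper in place of the real `@log_call`:

```python
import inspect
import types

def counting_wrapper(func):
    # Stand-in for nfo's @log_call; here it only counts calls.
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        return func(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

def auto_log_sketch(module, include_private=False):
    """Wrap every function defined in `module`; return the count patched."""
    patched = 0
    for name, obj in list(vars(module).items()):
        # Skip non-functions and functions merely imported into the module.
        if not inspect.isfunction(obj) or obj.__module__ != module.__name__:
            continue
        if name.startswith("_") and not include_private:
            continue
        setattr(module, name, counting_wrapper(obj))
        patched += 1
    return patched

# Demo against a synthetic module
demo = types.ModuleType("demo")

def add(a, b):
    return a + b

add.__module__ = "demo"  # pretend `add` was defined inside the demo module
demo.add = add
count = auto_log_sketch(demo)
```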
### `auto_log_by_name()`

Like `auto_log()` but accepts module name strings.

```python
auto_log_by_name("myapp.api", "myapp.core", level="INFO")
```

Parameters:

- `*module_names` - Module names to instrument
- Same kwargs as `auto_log()`

Returns: Number of functions patched
### `configure()`

One-liner project setup with automatic environment variable support.

```python
configure(
    sinks=["sqlite:logs.db", "csv:logs.csv"],
    level="INFO",
    modules=["myapp.api", "myapp.core"]
)
```

Parameters:

- `name` (str) - Logger name (default: "nfo")
- `level` (str) - Log level (default: "DEBUG")
- `sinks` (List[Union[str, Sink]]) - Sink specifications or instances
- `modules` (List[str]) - Stdlib modules to bridge
- `propagate_stdlib` (bool) - Forward to stdlib loggers
- `environment` (str) - Environment tag
- `version` (str) - Application version
- `llm_model` (str) - LLM model for analysis
- `detect_injection` (bool) - Enable prompt injection detection
- `force` (bool) - Re-configure even if already configured

Returns: Configured Logger instance
### `SQLiteSink`

Persist logs to a SQLite database for querying.

```python
sink = SQLiteSink("logs.db", table="function_calls")
```

Parameters:

- `db_path` (Any) - Database file path
- `table` (str) - Table name (default: "logs")
### `CSVSink`

Append logs to a CSV file.

```python
sink = CSVSink("logs.csv")
```

Parameters:

- `file_path` (Any) - CSV file path
### `MarkdownSink`

Write human-readable Markdown logs.

```python
sink = MarkdownSink("logs.md")
```

Parameters:

- `file_path` (Any) - Markdown file path
### `JSONSink`

Write structured JSON Lines output.

```python
sink = JSONSink("logs.jsonl", pretty=False)
```

Parameters:

- `file_path` (Any) - JSON file path
- `pretty` (bool) - Pretty-print JSON (default: False)
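A file sink of this shape is straightforward to picture: serialize each entry as one JSON object per line. The class below is a hypothetical sketch (it takes a plain dict rather than nfo's `LogEntry`, and writes to any file-like stream), shown mainly to illustrate the `write()`/`close()` sink interface documented later in this reference:

```python
import io
import json

class JSONLinesSinkSketch:
    """Illustrative JSON Lines sink; not nfo's actual JSONSink."""

    def __init__(self, stream):
        self.stream = stream

    def write(self, entry: dict) -> None:
        # One compact JSON object per line; `default=str` guards
        # against non-serializable values like datetimes.
        self.stream.write(json.dumps(entry, default=str) + "\n")

    def close(self) -> None:
        self.stream.close()

buf = io.StringIO()
sink = JSONLinesSinkSketch(buf)
sink.write({"function_name": "my_function", "duration_ms": 0.2})
lines = buf.getvalue().splitlines()
```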
### `PrometheusSink`

Export function call metrics to Prometheus.

```python
sink = PrometheusSink(
    delegate=SQLiteSink("logs.db"),
    port=9090
)
```

Parameters:

- `delegate` (Optional[Sink]) - Downstream sink
- `port` (int) - Metrics server port
- `prefix` (str) - Metric name prefix

Methods:

- `get_metrics()` - Return current metrics in Prometheus format
### `WebhookSink`

Send HTTP alerts to Slack, Discord, or Teams.

```python
sink = WebhookSink(
    url="https://hooks.slack.com/...",
    levels=["ERROR"],
    format="slack"
)
```

Parameters:

- `url` (str) - Webhook URL
- `delegate` (Optional[Sink]) - Downstream sink
- `levels` (List[str]) - Log levels to alert on
- `format` (str) - Payload format: "slack", "discord", "teams", "raw"
### `LLMSink`

AI-powered log analysis via litellm.

```python
sink = LLMSink(
    model="gpt-4o-mini",
    delegate=SQLiteSink("logs.db"),
    detect_injection=True
)
```

Parameters:

- `model` (str) - LLM model name
- `delegate` (Optional[Sink]) - Downstream sink
- `detect_injection` (bool) - Scan for prompt injection
- `analyze_levels` (List[str]) - Levels to analyze (default: ["ERROR"])
### `EnvTagger`

Auto-tag logs with environment, trace ID, and version.

```python
sink = EnvTagger(
    SQLiteSink("logs.db"),
    environment="prod",
    trace_id="abc123"
)
```

Parameters:

- `delegate` (Sink) - Downstream sink
- `environment` (Optional[str]) - Environment tag
- `trace_id` (Optional[str]) - Trace ID
- `version` (Optional[str]) - Application version
### `DynamicRouter`

Route logs to different sinks based on rules.

```python
router = DynamicRouter([
    (lambda e: e.level == "ERROR", SQLiteSink("errors.db")),
    (lambda e: e.environment == "prod", PrometheusSink())
])
```

Parameters:

- `rules` (List[tuple]) - (predicate, sink) pairs
- `default` (Optional[Sink]) - Default sink
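The routing mechanics can be sketched as a first-match-wins loop over the `(predicate, sink)` pairs; note that first-match-wins is an assumption here — nfo's `DynamicRouter` might instead deliver to every matching sink. Entries are plain dicts and `ListSink` is a throwaway test sink, both illustrative:

```python
class RouterSketch:
    """Illustrative predicate-based router; not nfo's DynamicRouter."""

    def __init__(self, rules, default=None):
        self.rules = rules      # list of (predicate, sink) pairs
        self.default = default  # fallback sink when no rule matches

    def write(self, entry):
        for predicate, sink in self.rules:
            if predicate(entry):
                sink.write(entry)
                return  # assumption: first matching rule wins
        if self.default is not None:
            self.default.write(entry)

class ListSink:
    """In-memory sink used only for this demo."""
    def __init__(self):
        self.entries = []
    def write(self, entry):
        self.entries.append(entry)

errors, rest = ListSink(), ListSink()
router = RouterSketch(
    [(lambda e: e["level"] == "ERROR", errors)],
    default=rest,
)
router.write({"level": "ERROR", "function_name": "deploy"})
router.write({"level": "DEBUG", "function_name": "health_check"})
```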
### `DiffTracker`

Detect when function output changes between versions.

```python
sink = DiffTracker(SQLiteSink("logs.db"))
```

Parameters:

- `delegate` (Sink) - Downstream sink
### `detect_prompt_injection()`

Scan text for common prompt injection patterns.

```python
result = detect_prompt_injection("ignore previous instructions")
```

Parameters:

- `text` (str) - Text to scan

Returns: Optional[str] - Injection type if detected
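Pattern-based detection of this kind usually boils down to a table of regexes keyed by injection type. The patterns and type names below are invented for illustration; nfo's actual pattern set and labels are not documented here:

```python
import re

# Hypothetical pattern table; nfo's real detector likely differs.
_INJECTION_PATTERNS = {
    "instruction_override": re.compile(
        r"ignore (all |any )?previous instructions", re.IGNORECASE),
    "role_hijack": re.compile(
        r"you are now|pretend to be", re.IGNORECASE),
}

def detect_prompt_injection_sketch(text):
    """Return the first matched injection type, or None."""
    for kind, pattern in _INJECTION_PATTERNS.items():
        if pattern.search(text):
            return kind
    return None
```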
### `scan_entry_for_injection()`

Scan a LogEntry's arguments for prompt injection.

```python
injection = scan_entry_for_injection(log_entry)
```

Parameters:

- `entry` (LogEntry) - Log entry to scan

Returns: Optional[str] - Injection type if detected
### `generate_trace_id()`

Generate a new trace ID for distributed tracing.

```python
trace_id = generate_trace_id()
```

Returns: str - UUID-based trace ID
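Since the return value is described as UUID-based, a plausible (but unconfirmed) equivalent is a UUID4 rendered as a hex string:

```python
import uuid

def generate_trace_id_sketch() -> str:
    """Illustrative UUID4-based trace ID; nfo's exact format may differ."""
    return uuid.uuid4().hex

tid = generate_trace_id_sketch()
```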
### `safe_repr()`

Safe string representation with truncation.

```python
repr_str = safe_repr(large_object, max_length=512)
```

Parameters:

- `value` (Any) - Value to represent
- `max_length` (Optional[int]) - Maximum length

Returns: str - Safe representation
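A "safe" repr has two jobs: never raise (even when an object's `__repr__` is broken) and cap the output length. This sketch shows one way to do both; the fallback text is an assumption, not nfo's actual format:

```python
def safe_repr_sketch(value, max_length=None):
    """Illustrative repr() that never raises and truncates long output."""
    try:
        text = repr(value)
    except Exception:
        # Assumed fallback format for objects whose __repr__ raises.
        text = f"<unreprable {type(value).__name__}>"
    if max_length is not None and len(text) > max_length:
        text = text[:max_length] + "..."
    return text

class Broken:
    def __repr__(self):
        raise RuntimeError("no repr for you")
```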
### `nfo run`

Run any command with automatic logging.

```shell
nfo run -- python script.py
nfo run -- bash deploy.sh prod
```

### `nfo logs`

Query logs from the SQLite database.

```shell
nfo logs --errors --last 24h
nfo logs --function deploy -n 50
```

### `nfo serve`

Start the centralized HTTP logging service.

```shell
nfo serve --port 8080
```

### `nfo version`

Print the nfo version.

```shell
nfo version
```

### `LogEntry`

Core data structure representing a function call log.
Fields:

- `timestamp` (datetime) - UTC timestamp
- `level` (str) - Log level (DEBUG/ERROR)
- `function_name` (str) - Qualified function name
- `module` (str) - Python module
- `args` (tuple) - Positional arguments
- `kwargs` (dict) - Keyword arguments
- `arg_types` (tuple) - Argument type names
- `kwarg_types` (dict) - Keyword argument type names
- `return_value` (Any) - Function return value
- `return_type` (str) - Return value type
- `exception` (Optional[str]) - Exception message
- `exception_type` (Optional[str]) - Exception class name
- `traceback` (Optional[str]) - Full traceback
- `duration_ms` (float) - Execution time in milliseconds
- `environment` (Optional[str]) - Environment tag
- `trace_id` (Optional[str]) - Trace ID
- `version` (Optional[str]) - Application version
- `llm_analysis` (Optional[str]) - LLM analysis result
- `extra` (dict) - Additional metadata

Methods:

- `now()` - Create timestamp
- `args_repr()` - Get truncated args representation
- `kwargs_repr()` - Get truncated kwargs representation
- `return_value_repr()` - Get truncated return value representation
- `as_dict()` - Convert to flat dictionary
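To make the field list concrete, here is a dataclass sketch covering a subset of the fields and two of the methods. It is an illustration of the shape of the structure, not nfo's actual class definition:

```python
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
from typing import Any, Optional

@dataclass
class LogEntrySketch:
    """Illustrative subset of nfo's LogEntry fields."""
    timestamp: datetime
    level: str
    function_name: str
    module: str
    args: tuple = ()
    kwargs: dict = field(default_factory=dict)
    return_value: Any = None
    exception: Optional[str] = None
    duration_ms: float = 0.0

    @staticmethod
    def now() -> datetime:
        # UTC timestamp, matching the documented field.
        return datetime.now(timezone.utc)

    def as_dict(self) -> dict:
        return asdict(self)

entry = LogEntrySketch(
    timestamp=LogEntrySketch.now(),
    level="DEBUG",
    function_name="my_function",
    module="myapp",
    args=(2, 3),
    return_value=5,
    duration_ms=0.2,
)
```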
### `Logger`

Central dispatcher for log entries.

Methods:

- `add_sink(sink)` - Register a sink
- `remove_sink(sink)` - Remove a sink
- `emit(entry)` - Send entry to all sinks
- `close()` - Close all sinks
### Sink Interface

All sinks implement the same interface:

```python
class Sink:
    def write(self, entry: LogEntry) -> None:
        """Write a log entry."""
        pass

    def close(self) -> None:
        """Close the sink and release resources."""
        pass
```

### Environment Variables

nfo automatically reads these environment variables:

- `NFO_LEVEL` - Default log level
- `NFO_SINKS` - Comma-separated sink specifications
- `NFO_ENV` - Environment tag
- `NFO_VERSION` - Application version
- `NFO_LLM_MODEL` - LLM model name
- `OPENAI_API_KEY` - OpenAI API key (for LLM features)
- `NFO_WEBHOOK_URL` - Webhook URL for alerts
- `NFO_PROMETHEUS_PORT` - Prometheus metrics port
- `NFO_LOG_DIR` - Directory for log files
- `NFO_PORT` - HTTP service port
### Sink Specification Strings

String format for `configure()` and the CLI:

```
sqlite:path/to/db.db
csv:path/to/file.csv
md:path/to/file.md
json:path/to/file.jsonl
prometheus:9090
```
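Parsing these specs is plausibly a split on the first colon followed by a registry lookup. The `SINK_KINDS` mapping and `parse_sink_spec` below are hypothetical; nfo's real parser is not shown in this document:

```python
# Hypothetical kind -> class-name registry; nfo's internal registry may differ.
SINK_KINDS = {
    "sqlite": "SQLiteSink",
    "csv": "CSVSink",
    "md": "MarkdownSink",
    "json": "JSONSink",
    "prometheus": "PrometheusSink",
}

def parse_sink_spec(spec: str):
    """Split a 'kind:target' spec into (sink class name, target)."""
    kind, _, target = spec.partition(":")
    if kind not in SINK_KINDS:
        raise ValueError(f"unknown sink kind: {kind!r}")
    return SINK_KINDS[kind], target
```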
### Common Exceptions

- `ImportError` - Optional dependencies not installed
- `sqlite3.Error` - Database connection issues
- `FileNotFoundError` - Invalid file paths
- `ConnectionError` - Webhook/network issues
- `ValueError` - Invalid configuration
### Best Practices

- Always wrap risky operations with `@catch`
- Use appropriate log levels (DEBUG for success, ERROR for failures)
- Configure multiple sinks for redundancy
- Set `max_repr_length` for functions with large arguments
- Use environment variables for deployment-specific configuration