Altergo Model Boilerplate - Setup Guide
This guide explains how to clone and use the Altergo Model Boilerplate to develop battery digital twin models for the Altergo Platform.
Prerequisites
Before you begin, ensure you have:
- Python 3.8+ installed on your system
- Git for version control
- Altergo Platform Access with valid API credentials
- Asset ID from your Altergo digital twin setup
1. Clone the Repository
git clone <repository-url>
cd model_boilerplate
Replace `<repository-url>` with the actual repository URL provided by your team.
2. Install Dependencies
The boilerplate uses the Altergo SDK and several scientific computing libraries:
pip install -r requirements.txt
This will install:
- `altergo-sdk` (from Altergo's private repository)
- `numpy >= 1.20.0` (numerical computing)
- `scipy >= 1.7.0` (scientific computing)
- `pandas >= 1.3.0` (data manipulation)
- `plotly >= 5.0.0` (visualizations for debug mode)
3. Configure Development Environment
Create Dev Parameters File
Copy the template configuration file:
cp template.dev-parameters.json dev-parameters.json
Edit dev-parameters.json with your specific Altergo credentials:
{
"altergoUserApiKey": "YOUR_API_KEY_HERE",
"altergoFactoryApi": "https://YOUR_COMPANY.altergo.io",
"altergoIotApi": "https://iot.YOUR_COMPANY.altergo.io",
"assetId": "YOUR_ASSET_ID_HERE"
}
Important:
- Replace `YOUR_API_KEY_HERE` with your actual Altergo API key
- Replace `YOUR_COMPANY` with your company's Altergo subdomain
- Replace `YOUR_ASSET_ID_HERE` with the ID of the digital twin asset you want to analyze
Configure Model Settings
The main model configuration is in altergo-settings.json. Key settings include:
{
"parameters": {
"execution": {
"enabled_models": "eq_cycles,adv_eq_cycles",
"compute_type": "manual",
"max_days_period_compute": 1,
"debug_mode": true,
"upload_output": false
},
"models": {
"eq_cycles": {
"inputs": {
"current": {"default": "Current"},
"capacity": {"default": "Capacity"}
},
"configuration": {
"charge_efficiency": 0.98,
"discharge_efficiency": 0.99
}
}
}
}
}
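For a quick sanity check during development, the execution block above can be read back with a few lines of plain Python. This is a minimal sketch (the embedded JSON mirrors the example above; it is not boilerplate API):

```python
import json

# Example altergo-settings.json content, trimmed to the execution block
settings_json = """
{
  "parameters": {
    "execution": {
      "enabled_models": "eq_cycles,adv_eq_cycles",
      "compute_type": "manual",
      "max_days_period_compute": 1,
      "debug_mode": true,
      "upload_output": false
    }
  }
}
"""

execution = json.loads(settings_json)["parameters"]["execution"]

# enabled_models is a comma-separated string; split it into a clean list
enabled = [m.strip() for m in execution["enabled_models"].split(",") if m.strip()]
```

In a real project you would `json.load()` the `altergo-settings.json` file itself instead of an embedded string.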
Input/Output Management
The framework uses a two-file system to manage data mapping between models and the Altergo platform:
Model Definition (models/{model_name}/model.json)
Defines the logical interface of your model. Each input and output must specify the following properties:
{
"inputs": {
"current": {
"unit": "A",
"type": "pd.Series",
"description": "Battery current measurement",
"required": true
},
"capacity": {
"unit": "Ah",
"type": "float",
"description": "Battery nominal capacity parameter",
"required": true
},
"initial_equivalent_cycles": {
"unit": "cycles",
"type": "float",
"initialisation_value": "equivalent_cycles",
"description": "Initial equivalent cycle count",
"required": false
},
"OCV_table": {
"unit": "V",
"type": "dict",
"description": "OCV lookup table",
"required": false
}
},
"outputs": {
"equivalent_cycles": {
"unit": "cycles",
"type": "pd.Series",
"description": "Calculated equivalent charge/discharge cycles"
},
"test_dataset": {
"unit": "-",
"type": "df",
"description": "Dataset output example"
}
}
}
Required Properties for Inputs and Outputs
Each input and output in model.json must define these mandatory properties:
For Inputs:
- Key (property name): The logical name used in your model code (e.g., `"current"`, `"capacity"`)
- `unit` (string): Physical unit of measurement (e.g., `"A"`, `"Ah"`, `"V"`, `"-"` for dimensionless)
- `type` (string): Data type - one of `"pd.Series"`, `"float"`, `"dict"`, `"df"`
- `description` (string): Human-readable explanation of what this input represents
- `required` (boolean): Whether this input is mandatory for model execution
For Outputs:
- Key (property name): The logical name returned by your model code (e.g., `"equivalent_cycles"`)
- `unit` (string): Physical unit of the calculated result
- `type` (string): Data type of the output - one of `"pd.Series"`, `"dict"`, `"df"`
- `description` (string): Human-readable explanation of what this output represents
Optional Properties:
- `initialisation_value` (inputs only): References another output to use as the initial value for incremental processing
Example Breakdown:
"current": { // Key: logical name
"unit": "A", // Unit: amperes
"type": "pd.Series", // Type: time series data
"description": "Battery current measurement", // Description: what it is
"required": true // Required: mandatory input
}
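The required-property rules above can be checked mechanically before deploying a model. The sketch below is illustrative only: `validate_model_definition` and its message format are not part of the boilerplate, just one way to enforce the rules listed above:

```python
REQUIRED_INPUT_KEYS = {"unit", "type", "description", "required"}
REQUIRED_OUTPUT_KEYS = {"unit", "type", "description"}
VALID_TYPES = {"pd.Series", "float", "dict", "df"}

def validate_model_definition(model_def):
    """Return a list of problems found in a model.json-style dict."""
    problems = []
    for name, spec in model_def.get("inputs", {}).items():
        missing = REQUIRED_INPUT_KEYS - spec.keys()
        if missing:
            problems.append(f"input '{name}' missing {sorted(missing)}")
        elif spec["type"] not in VALID_TYPES:
            problems.append(f"input '{name}' has invalid type {spec['type']!r}")
    for name, spec in model_def.get("outputs", {}).items():
        missing = REQUIRED_OUTPUT_KEYS - spec.keys()
        if missing:
            problems.append(f"output '{name}' missing {sorted(missing)}")
    return problems

# Example: a definition whose input forgot the 'required' flag
example = {
    "inputs": {
        "current": {"unit": "A", "type": "pd.Series",
                    "description": "Battery current measurement"},
    },
    "outputs": {
        "equivalent_cycles": {"unit": "cycles", "type": "pd.Series",
                              "description": "Calculated cycles"},
    },
}
```

Running such a check in CI catches incomplete `model.json` files before they reach the platform.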
Data Mapping (altergo-settings.json)
Maps logical model inputs/outputs to actual Altergo platform resources:
{
"parameters": {
"models": {
"eq_cycles": {
"inputs": {
"current": {
"default": "Model A/Current"
},
"capacity": {
"default": "Capacity"
},
"initial_equivalent_cycles": {
"default": "Model A/Eq Cycles"
},
"OCV_table": {
"default": "BP/LFP_config"
}
},
"outputs": {
"equivalent_cycles": {
"default": "Model A/Eq Cycles"
},
"test_dataset": {
"default": "My Dataset"
}
}
}
}
}
}
Data Type Mapping Rules
The framework automatically handles data retrieval based on input/output types:
Input Types:
- `pd.Series`: Time series sensor data
  - Mapping: `"Sensor Group/Sensor Name"` (path to sensor in blueprint)
  - Example: `"Model A/Current"` → fetches current sensor data
- `float`: Scalar parameter values
  - Standard: `"Parameter Name"` (blueprint parameter name)
  - Initialization: If `initialisation_value` is specified, gets the first value from the specified output
  - Example: `"Capacity"` → fetches capacity parameter
- `dict`: JSON dataset
  - Blueprint: `"BP/dataset_name"` → fetches from blueprint datasets
  - Digital Twin: `"DT/dataset_name"` → fetches from asset-specific datasets
  - Example: `"BP/LFP_config"` → fetches LFP configuration JSON
- `df`: CSV dataset
  - Blueprint: `"BP/dataset_name"` → fetches from blueprint datasets
  - Digital Twin: `"DT/dataset_name"` → fetches from asset-specific datasets
  - Example: `"My Dataset"` → fetches CSV dataset
Output Types:
- `pd.Series`: Time series results → uploaded as sensor data
  - Mapping: `"Sensor Group/Sensor Name"` (path to sensor in blueprint)
  - Example: `"Model A/Eq Cycles"` → uploads as sensor data
- `df`: Tabular results → uploaded as CSV dataset
  - Mapping: `"Dataset Name"` (creates CSV dataset)
  - Example: `"My Dataset"` → generates CSV dataset named "My Dataset"
- `dict`: JSON results → uploaded as JSON dataset
  - Mapping: `"Dataset Name"` (creates JSON dataset)
  - Example: `"Config Output"` → generates JSON dataset named "Config Output"
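The mapping rules above come down to two pieces of information: the declared type and the shape of the mapping string. The following sketch shows how such a dispatch could look; the function name and the returned labels are illustrative, since the real boilerplate performs this resolution internally:

```python
def classify_mapping(input_type, mapping):
    """Decide where a mapped value would be fetched from, per the rules above."""
    if input_type == "pd.Series":
        # Sensor path: "Sensor Group/Sensor Name"
        group, _, sensor = mapping.partition("/")
        return ("sensor", group, sensor)
    if input_type == "float":
        # Blueprint parameter, referenced by name
        return ("parameter", mapping)
    if input_type in ("dict", "df"):
        # Datasets distinguish blueprint ("BP/") vs digital twin ("DT/") sources
        if mapping.startswith("BP/"):
            return ("blueprint_dataset", mapping[3:])
        if mapping.startswith("DT/"):
            return ("asset_dataset", mapping[3:])
        return ("dataset", mapping)
    raise ValueError(f"unknown input type: {input_type}")
```

For example, `classify_mapping("pd.Series", "Model A/Current")` resolves to the `Current` sensor in the `Model A` group.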
Initialization Values
For inputs with `"initialisation_value"`, the framework:
- Looks for existing output data from the same model
- Uses the most recent value as the initial input
- Enables incremental processing by maintaining state between runs
Example:
"initial_equivalent_cycles": {
"type": "float",
"initialisation_value": "equivalent_cycles"
}
This gets the last calculated equivalent cycle count to continue from where the previous run ended.
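The seeding behaviour can be illustrated with plain pandas. This is a sketch of the idea only, with made-up data, not the boilerplate's actual retrieval code:

```python
import pandas as pd

# Previous run's "equivalent_cycles" output, indexed by timestamp (illustrative data)
previous_output = pd.Series(
    [10.0, 10.4, 10.9],
    index=pd.to_datetime(["2023-10-14", "2023-10-15", "2023-10-16"]),
)

# The framework seeds the input with the most recent output value,
# so the next run continues counting from where the previous one ended.
initial_equivalent_cycles = float(previous_output.sort_index().iloc[-1])
```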
Execution Configuration Parameters
The execution section in altergo-settings.json controls how models are executed. Here's a detailed explanation of each parameter:
Core Execution Parameters
{
"parameters": {
"execution": {
"enabled_models": "eq_cycles,adv_eq_cycles",
"compute_type": "incremental",
"max_days_period_compute": 7,
"debug_mode": true,
"upload_output": false,
"flush_output": false,
"manual_start_date": "",
"manual_end_date": ""
}
}
}
enabled_models (string, required)
Purpose: Specifies which models to execute during the run.
- Format: Comma-separated list of model names
- Examples:
"eq_cycles"- Run only the equivalent cycles model"eq_cycles,adv_eq_cycles"- Run both models""- If empty, all available models in themodels/directory will be executed
- Note: Model names must match the directory names in
models/
compute_type (string, default: "incremental")
Purpose: Determines the data processing strategy and time range calculation.
- `"incremental"` (Recommended for production):
  - Processes only new data since the last model execution
  - Automatically detects the last output timestamp and starts from there
  - Most efficient for regular executions
  - Maintains continuity between runs
- `"manual"` (For specific analysis):
  - Processes data within manually specified date ranges
  - Requires `manual_start_date` and `manual_end_date` to be set
  - Useful for historical analysis or specific time periods
  - Falls back to automatic calculation if dates are not provided
- `"full"` (For complete reprocessing):
  - Reprocesses all available data from scratch
  - Ignores previous outputs and starts fresh
  - Most resource-intensive but ensures complete recalculation
  - Useful after model logic changes or data corrections
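The three strategies can be summarized as a small time-range resolver. This is a sketch under stated assumptions: `resolve_time_range` and its parameters are illustrative names, not boilerplate API:

```python
from datetime import datetime, timezone

def resolve_time_range(compute_type, last_output_ts=None,
                       manual_start=None, manual_end=None,
                       data_start=None, now=None):
    """Pick the (start, end) window to process, per compute_type."""
    now = now or datetime.now(timezone.utc)
    if compute_type == "manual" and manual_start and manual_end:
        return manual_start, manual_end        # explicit user-specified window
    if compute_type == "incremental" and last_output_ts is not None:
        return last_output_ts, now             # only new data since the last run
    return data_start, now                     # "full" (or fallback): everything

# Incremental run: pick up where the last output stopped
start, end = resolve_time_range(
    "incremental",
    last_output_ts=datetime(2023, 10, 16, tzinfo=timezone.utc),
    now=datetime(2023, 10, 17, tzinfo=timezone.utc),
)
```

Note the fall-through: a `"manual"` run without dates behaves like an automatic calculation, matching the fallback described above.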
max_days_period_compute (integer, default: 7)
Purpose: Limits the maximum time period that can be processed in a single execution.
- Range: 1-365 days (practical limits)
- Examples:
  - `1` - Process a maximum of 1 day of data
  - `7` - Process a maximum of 1 week of data
  - `30` - Process a maximum of 1 month of data
- Behavior: If the calculated time range exceeds this limit, the end date is adjusted to respect the maximum period
- Use Case: Prevents excessive resource usage and execution timeouts
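The clamping behaviour amounts to a few lines (a sketch only; the real implementation may differ):

```python
from datetime import datetime, timedelta, timezone

def clamp_end_date(start, end, max_days):
    """Adjust `end` so the window never exceeds max_days_period_compute."""
    return min(end, start + timedelta(days=max_days))

start = datetime(2023, 10, 1, tzinfo=timezone.utc)
end = datetime(2023, 10, 20, tzinfo=timezone.utc)
clamped = clamp_end_date(start, end, max_days=7)  # 19-day request cut to 7 days
```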
debug_mode (boolean, default: false)
Purpose: Enables generation of interactive HTML debug dashboards.
- `true`:
  - Generates detailed HTML reports for each model
  - Includes input data visualization, parameter values, and output plots
  - Creates files like `debug_eq_cycles.html` in the execution directory
  - Useful for development, testing, and troubleshooting
- `false`:
  - No debug outputs generated
  - Faster execution and lower resource usage
  - Recommended for production deployments
upload_output (boolean, default: true)
Purpose: Controls whether model results are uploaded to the Altergo platform.
- `true` (Production mode):
  - Uploads all model outputs to the Altergo platform
  - Results become available in dashboards and APIs
  - Enables data persistence and sharing
- `false` (Development mode):
  - Results are calculated but not uploaded
  - Useful for local testing and development
  - Prevents polluting production data with test results
flush_output (boolean, default: false)
Purpose: Controls whether existing output data is deleted before execution.
- `true` (Clean slate):
  - Deletes all existing output data for the models being executed
  - Ensures a fresh start without historical data interference
  - Useful when model logic has changed significantly
- `false` (Preserve existing):
  - Keeps existing output data intact
  - New results are added or updated as needed
  - Recommended for normal incremental operations
manual_start_date (string, optional)
Purpose: Specifies the start date for manual compute type.
- Format: ISO 8601 datetime string (e.g., `"2023-10-15T00:00:00Z"`)
- Usage: Only effective when `compute_type` is set to `"manual"`
- Example: `"2023-10-15T00:00:00Z"` - Start processing from October 15, 2023
- Note: Must be paired with `manual_end_date`
manual_end_date (string, optional)
Purpose: Specifies the end date for manual compute type.
- Format: ISO 8601 datetime string (e.g., `"2023-10-16T23:59:59Z"`)
- Usage: Only effective when `compute_type` is set to `"manual"`
- Example: `"2023-10-16T23:59:59Z"` - End processing at the end of October 16, 2023
- Note: Must be paired with `manual_start_date`
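Since both dates must be set together, it is easy to sanity-check a manual configuration before running it. A minimal sketch (assuming `check_manual_dates` as an illustrative helper, not boilerplate API):

```python
from datetime import datetime

def check_manual_dates(execution):
    """Validate the manual date pair in an execution config dict."""
    start = execution.get("manual_start_date")
    end = execution.get("manual_end_date")
    # The dates are only meaningful as a pair
    if bool(start) != bool(end):
        return "manual_start_date and manual_end_date must be set together"
    if start and end:
        # fromisoformat() does not accept a trailing "Z" before Python 3.11
        s = datetime.fromisoformat(start.replace("Z", "+00:00"))
        e = datetime.fromisoformat(end.replace("Z", "+00:00"))
        if s >= e:
            return "manual_start_date must be before manual_end_date"
    return None  # configuration looks consistent

ok = check_manual_dates({
    "manual_start_date": "2023-10-15T00:00:00Z",
    "manual_end_date": "2023-10-16T23:59:59Z",
})
```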
Configuration Examples
Development Configuration
{
"execution": {
"enabled_models": "eq_cycles",
"compute_type": "manual",
"max_days_period_compute": 1,
"debug_mode": true,
"upload_output": false,
"flush_output": false,
"manual_start_date": "2023-10-15T00:00:00Z",
"manual_end_date": "2023-10-15T23:59:59Z"
}
}
Production Configuration
{
"execution": {
"enabled_models": "eq_cycles,adv_eq_cycles",
"compute_type": "incremental",
"max_days_period_compute": 7,
"debug_mode": false,
"upload_output": true,
"flush_output": false
}
}
Historical Analysis Configuration
{
"execution": {
"enabled_models": "eq_cycles",
"compute_type": "full",
"max_days_period_compute": 30,
"debug_mode": true,
"upload_output": true,
"flush_output": true
}
}
Key Configuration Options Summary:
- `enabled_models`: Comma-separated list of models to run (e.g., `"eq_cycles"`, `"adv_eq_cycles"`)
- `compute_type`:
  - `"manual"` - Process a specific date range (set `manual_start_date`/`manual_end_date`)
  - `"incremental"` - Process new data since the last run
  - `"full"` - Reprocess all available data
- `debug_mode`: Set to `true` to generate HTML debug dashboards
- `upload_output`: Set to `false` for local testing, `true` to upload results to the platform
4. Test Your Setup
Run Local Test
Execute the models locally with your development configuration:
python entrypoint.py
The system will:
- Load configuration from `altergo-settings.json`
- Connect to Altergo APIs using credentials from `dev-parameters.json`
- Fetch data for the configured time period
- Execute enabled models
- Generate debug outputs (if enabled)
- Optionally upload results to the platform
Verify Output
If successful, you should see output similar to:
Starting Model Execution Framework
Loading configuration from altergo-settings.json
Connecting to Altergo APIs...
Fetching data for asset: 65e7487cad25e34679d71b66
Processing time range: 2023-10-16T00:18:04+08:00 to 2023-10-16T18:59:59+00:00
Executing model: eq_cycles
Model eq_cycles completed successfully
Generated debug dashboard: debug_eq_cycles.html
Execution completed
Debug Mode Output
When debug_mode is enabled, the framework generates interactive HTML dashboards for each model:
- Input Data Quality: Visualizes sensor data gaps and quality
- Model Parameters: Shows configuration values used
- Model Outputs: Interactive plots of calculated results
- Data Statistics: Summary statistics and validation checks
Open the generated HTML files in your browser to inspect the results.
5. Directory Structure
After setup, your project structure should look like:
model_boilerplate/
├── entrypoint.py # Main execution script
├── requirements.txt # Python dependencies
├── altergo-settings.json # Model configuration
├── dev-parameters.json # Local dev credentials (gitignored)
├── template.dev-parameters.json # Template for credentials
├── models/ # Model implementations
│ ├── __init__.py
│ ├── eq_cycles/ # Basic equivalent cycles model
│ │ ├── eq_cycles_model.py
│ │ ├── model.json
│ │ └── README.md
│ └── adv_eq_cycles/ # Advanced equivalent cycles
│ ├── adv_eq_cycles.py
│ ├── model.json
│ └── README.md
└── documentation/ # Project documentation
6. Common Issues and Troubleshooting
Authentication Errors
If you see authentication errors:
- Verify your API key is correct and active
- Check that the factory and IoT API URLs match your company's setup
- Ensure your account has access to the specified asset ID
Data Connection Issues
If models can't fetch data:
- Verify the asset ID exists and you have access
- Check that the specified date range contains data
- Ensure your network can reach the Altergo APIs
Missing Dependencies
If you see import errors:
- Ensure all requirements are installed: `pip install -r requirements.txt`
- Verify you have access to the private Altergo SDK repository
- Check Python version compatibility (3.8+)
Model Execution Errors
If models fail to execute:
- Check the logs for specific error messages
- Verify sensor mappings in `altergo-settings.json` match your blueprint
- Enable debug mode to see detailed data flow information
7. Next Steps
Once your setup is working:
- Explore Existing Models: Study the `eq_cycles` and `adv_eq_cycles` implementations to understand the pattern
- Create Custom Models: Follow the model creation guide to build your own battery analysis models
- Configure for Your Data: Adjust sensor mappings and parameters for your specific battery setup
- Deploy to Production: Configure for automatic execution on the Altergo platform
8. Development Workflow
For ongoing development:
- Local Testing: Always test changes locally with `debug_mode: true`
- Version Control: Commit your changes to git (excluding `dev-parameters.json`)
dev-parameters.json) - Model Validation: Verify outputs make sense using debug dashboards
- Platform Deployment: Push changes and update platform configuration
- Monitoring: Monitor execution logs and results on the Altergo platform
Entrypoint Options
The boilerplate provides two different entrypoint approaches to accommodate different use cases and levels of customization needed for your models.
Simple Entrypoint (entrypoint_simple.py)
When to use: When your models can fully utilize the boilerplate's automatic input/output capabilities without requiring custom data manipulation.
The simple entrypoint is designed for models that:
- Use standard input types (`pd.Series`, `float`, `dict`, `df`) that can be automatically fetched and prepared by the boilerplate
- Don't require custom data preprocessing or manipulation before model execution
- Don't need custom post-processing of results before upload
- Follow the standard Altergo platform data patterns
Complete Code:
"""
Model Execution Entrypoint
Main entry point for executing battery digital twin models.
"""
import sys
# Altergo SDK
from altergo_sdk.tools.utils import extract_altergo_parameters
# Boilerplate execution logic
from altergo_sdk.boiler_plate.boiler_plate import AltergoModelBoilerplate
# Import models to trigger registration
import models
def main():
"""Main execution function."""
print("Starting Model Execution Framework")
# Extract Altergo parameters from altergo-settings.json and return it into altergo_arguments
altergo_arguments = extract_altergo_parameters()
# Create boilerplate instance and execute the main logic
boilerplate = AltergoModelBoilerplate(altergo_arguments)
boilerplate.execute()
# You can add additional custom logic here if needed using model_instances output_data attribute
# You also have access to boilerplate.asset, boilerplate.client, boilerplate.results, etc.
if __name__ == "__main__":
main()
What happens in simple mode:
- Single call execution: `boilerplate.execute()` runs all phases automatically
- Automatic data preparation: All inputs defined in `model.json` are automatically fetched based on mappings in `altergo-settings.json`
- Standard model execution: Models receive properly formatted data according to their input specifications
- Automatic output handling: Results are automatically uploaded based on output mappings
- Built-in error handling: Standard error handling and logging throughout the process
Advantages:
- Minimal code: Very clean and simple implementation
- Zero custom logic needed: Everything handled automatically
- Consistent execution: Follows standard patterns every time
- Easy maintenance: Less code to maintain and debug
Limitations:
- No custom preprocessing: Cannot modify inputs before model execution
- No custom postprocessing: Cannot manipulate results before upload
- Standard data flow only: Must work with boilerplate's automatic data handling
Advanced Entrypoint (entrypoint_advanced.py)
When to use: When you need to customize the data flow, add custom inputs, or perform additional processing between execution phases.
The advanced entrypoint is designed for scenarios where:
- Models require custom data preprocessing that the boilerplate cannot automatically handle
- You need to add computed parameters or derived inputs not available through standard mappings
- Custom validation or data quality checks are needed before model execution
- Results require post-processing or additional calculations before upload
- Integration with external data sources or APIs is required
- Complex initialization values need to be calculated from multiple sources
Complete Code:
"""
Model Execution Entrypoint
Main entry point for executing battery digital twin models.
"""
import sys
# Altergo SDK
from altergo_sdk.tools.utils import extract_altergo_parameters
# Boilerplate execution logic
from altergo_sdk.boiler_plate.boiler_plate import AltergoModelBoilerplate
# Import models to trigger registration
import models
def main():
"""Main execution function."""
print("Starting Model Execution Framework")
# Extract Altergo parameters from altergo-settings.json and return it into altergo_arguments
altergo_arguments = extract_altergo_parameters()
# Create boilerplate instance and execute the main logic
boilerplate = AltergoModelBoilerplate(altergo_arguments)
# Phase 1: Data Preparation
boilerplate.prepare_models_data()
# Here you can add custom inputs manipulation or additional data fetching if needed.
# boilerplate.models["my_model_name"].input_data['custom_key'] = 'custom_value'
# Phase 2: Model Execution
boilerplate.execute_models()
# Phase 3: Debug Dashboard Generation
if boilerplate.execution_config.get('debug_mode', False):
boilerplate.show_debug_dashboards()
# Phase 4: Output Management
if boilerplate.execution_config.get('upload_output', True):
boilerplate.upload_models_output()
print("\nModel execution completed successfully.")
# You can add additional custom logic here if needed using model_instances output_data attribute
# You also have access to boilerplate.asset, boilerplate.client, boilerplate.results, etc.
if __name__ == "__main__":
main()
Detailed Phase Breakdown:
Phase 1: Data Preparation (boilerplate.prepare_models_data())
This phase handles:
- Asset data retrieval from Altergo platform
- Sensor data fetching based on configured time ranges
- Dataset loading (JSON/CSV) from blueprint or digital twin sources
- Basic input validation and preprocessing
- Time series data interpolation and gap filling
After this phase, you can:
# Add custom computed inputs
for model_name, model in boilerplate.models_dict.items():
# Example: Add a derived parameter
if 'temperature' in model.input_data and 'ambient_temp' in model.input_data:
model.input_data['temp_difference'] = (
model.input_data['temperature'] - model.input_data['ambient_temp']
)
# Example: Add external data source
if model_name == 'advanced_thermal_model':
weather_data = fetch_weather_data(boilerplate.asset.location)
model.input_data['weather_conditions'] = weather_data
# Example: Custom validation
current_data = model.input_data.get('current')
if current_data is not None and len(current_data) < 100:
print(f"Warning: {model_name} has insufficient current data points")
model.input_data['data_quality_flag'] = 'insufficient_data'
Phase 2: Model Execution (boilerplate.execute_models())
This phase:
- Validates all required inputs are present
- Executes each enabled model with its prepared input data
- Stores results in `model.output_data` for each model
- Handles execution errors gracefully
After this phase, you can:
# Access and modify results before upload
for model_name, model in boilerplate.models_dict.items():
if model.output_data:
# Example: Apply post-processing filters
if 'equivalent_cycles' in model.output_data:
cycles = model.output_data['equivalent_cycles']
# Apply smoothing filter
model.output_data['equivalent_cycles_smoothed'] = apply_smoothing(cycles)
# Example: Add computed metrics
if 'voltage' in model.output_data and 'current' in model.output_data:
power = model.output_data['voltage'] * model.output_data['current']
model.output_data['power'] = power
# Example: Quality assessment
data_quality_score = assess_output_quality(model.output_data)
model.output_data['quality_score'] = data_quality_score
Phase 3: Debug Dashboard Generation (Optional)
Generates interactive HTML dashboards when debug mode is enabled.
Phase 4: Output Management (boilerplate.upload_models_output())
This phase:
- Uploads time series results as sensor data
- Creates/updates datasets for DataFrame and dictionary outputs
- Generates execution summary and task outputs
- Handles upload errors and provides feedback
Advanced Customization Examples:
Custom Data Integration
# After data preparation phase
for model_name, model in boilerplate.models_dict.items():
if model_name == 'predictive_maintenance_model':
# Add maintenance history from external system
maintenance_records = fetch_maintenance_history(boilerplate.asset.serial_number)
model.input_data['maintenance_history'] = maintenance_records
# Add fleet-wide statistics
fleet_stats = calculate_fleet_statistics(boilerplate.asset.blueprint.name)
model.input_data['fleet_benchmarks'] = fleet_stats
Custom Validation and Quality Control
# After data preparation phase
def validate_model_inputs(model):
"""Custom input validation logic."""
issues = []
# Check data completeness
for input_name, input_data in model.input_data.items():
if isinstance(input_data, pd.Series):
missing_ratio = input_data.isna().sum() / len(input_data)
if missing_ratio > 0.1: # More than 10% missing
issues.append(f"{input_name}: {missing_ratio:.1%} missing data")
# Check data ranges
if 'current' in model.input_data:
current = model.input_data['current']
if current.abs().max() > 1000: # Current over 1000A
issues.append("Current values exceed expected range")
return issues
# Apply validation
for model_name, model in boilerplate.models_dict.items():
validation_issues = validate_model_inputs(model)
if validation_issues:
print(f"Validation issues for {model_name}: {validation_issues}")
model.input_data['validation_issues'] = validation_issues
Custom Result Processing
# After model execution phase
def enhance_results(model_name, model):
"""Add custom metrics and analysis to model results."""
if 'equivalent_cycles' in model.output_data:
cycles = model.output_data['equivalent_cycles']
# Calculate degradation rate
if len(cycles) > 100:
degradation_rate = calculate_degradation_rate(cycles)
model.output_data['degradation_rate'] = degradation_rate
# Add statistical metrics
model.output_data['cycle_statistics'] = {
'total_cycles': cycles.iloc[-1] if len(cycles) > 0 else 0,
'daily_average': cycles.diff().mean() if len(cycles) > 1 else 0,
'peak_usage_day': cycles.diff().idxmax() if len(cycles) > 1 else None
}
# Apply enhancements
for model_name, model in boilerplate.models_dict.items():
if model.output_data:
enhance_results(model_name, model)
Choosing the Right Entrypoint
| Scenario | Recommended Entrypoint | Reason |
|---|---|---|
| Standard battery monitoring models | Simple | All required data available through standard mappings |
| Models using only blueprint sensors and parameters | Simple | Boilerplate handles everything automatically |
| Basic equivalent cycles, SOC estimation | Simple | Standard input/output patterns work perfectly |
| Models requiring external API data | Advanced | Need custom data fetching in Phase 1 |
| Complex initialization from multiple sources | Advanced | Need custom logic between data prep and execution |
| Models with custom validation requirements | Advanced | Need custom checks before/after execution |
| Results requiring post-processing | Advanced | Need access to modify outputs before upload |
| Integration with fleet-wide analytics | Advanced | Need custom data aggregation and analysis |
| Development and debugging complex models | Advanced | Better visibility and control over each phase |
Migration Between Entrypoints
You can easily switch between entrypoints:
From Simple to Advanced:
- Copy the advanced entrypoint structure
- Add your custom logic in the appropriate phases
- Test thoroughly to ensure data flow works correctly
From Advanced to Simple:
- Move any essential custom logic into the model implementation itself
- Ensure all required inputs are available through standard mappings
- Switch to the simple entrypoint for cleaner code
Both entrypoints provide full access to the boilerplate's capabilities - the difference is in the level of control and customization you need during execution.
Support
For questions or issues:
- Check the model creation guide for implementation details
- Review existing model examples in the `models/` directory
- Consult the documentation in the `documentation/` folder
documentation/folder - Contact your Altergo support team for platform-specific issues
Security Note: Never commit dev-parameters.json to version control as it contains sensitive API credentials. The file is already in .gitignore.