Temperature Parameter Fix for GPT-5 Mini
Problem
The GPT-5 mini model does not support the temperature parameter when it is set to 0.0 or any other non-default value; only the default temperature (1) is supported. This caused the following error:
```
Error code: 400 - {'error': {'message': "Unsupported value: 'temperature' does not support 0.0 with this model. Only the default (1) value is supported.", 'type': 'invalid_request_error', 'param': 'temperature', 'code': 'unsupported_value'}}
```
Root Cause
The system was always passing a temperature parameter to the LLM, even when it was commented out in the configuration file. This happened because:
- `LLMParametersConfig` had a default value of `temperature: float = 0`
- `LLMRagConfig` had a default value of `temperature: float = 0.2`
- The LLM client always passed temperature to the model constructor (sketched just after this list)
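A minimal sketch of the pre-fix shape, assuming pydantic-style config models; the base class, the extra field, and the client call in the comment are illustrative, not the actual contents of `service/config.py`:

```python
from pydantic import BaseModel


class LLMParametersConfig(BaseModel):
    temperature: float = 0           # always carries a value, even when the YAML omits it
    max_context_length: int = 100000


class LLMRagConfig(BaseModel):
    temperature: float = 0.2         # same issue for the RAG config


# Because the field always has a value, the client ended up doing the
# equivalent of constructor(model=..., temperature=0), which GPT-5 mini rejects.
```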
Solution
Modified the code to only pass the temperature parameter when it's explicitly set in the configuration:
1. Changed Configuration Classes
File: `service/config.py`
- `LLMParametersConfig.temperature`: Changed from `float = 0` to `Optional[float] = None`
- `LLMRagConfig.temperature`: Changed from `float = 0.2` to `Optional[float] = None` (see the sketch below)
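A minimal sketch of the changed declarations, again assuming pydantic-style models; only the two temperature fields reflect the described change, the rest is illustrative:

```python
from typing import Optional

from pydantic import BaseModel


class LLMParametersConfig(BaseModel):
    # None now means "not configured": the key is then left out of the LLM kwargs
    temperature: Optional[float] = None
    max_context_length: int = 100000


class LLMRagConfig(BaseModel):
    # Previously defaulted to 0.2; now unset unless the YAML provides a value
    temperature: Optional[float] = None
```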
2. Updated Configuration Loading
File: `service/config.py` - `get_llm_config()` method
- Only include `temperature` in the config dict when it is explicitly set (not None), as sketched below
- Added proper null checks for both the new and legacy configuration formats
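A sketch of how `get_llm_config()` can apply that check; the surrounding fields and attribute names are hypothetical, and only the `is not None` guard mirrors the described change:

```python
def get_llm_config(self) -> dict:
    """Build the kwargs dict handed to the LLM client (illustrative sketch)."""
    config: dict = {"model": self.llm.model}  # hypothetical field; other keys omitted

    # New-style config: only forward temperature when it was explicitly set.
    if self.llm.parameters.temperature is not None:
        config["temperature"] = self.llm.parameters.temperature

    # The legacy config.yaml path gets the same null check (not shown here).
    return config
```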
3. Modified LLM Client Construction
File: `service/llm_client.py` - `_create_llm()` method
- Changed to only pass the `temperature` parameter when it exists in the config (see the sketch below)
- Removed hardcoded fallback temperature values
- Works for both OpenAI and Azure OpenAI providers
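A sketch of the client-side change; the provider classes are referenced through hypothetical attributes because the real constructor wiring in `service/llm_client.py` is not shown in this note:

```python
from typing import Any, Dict


def _create_llm(self, config: Dict[str, Any]):
    """Construct the chat model, forwarding temperature only when configured."""
    kwargs: Dict[str, Any] = {"model": config["model"]}

    # No hardcoded fallback: if the key is absent, the model's own default
    # temperature applies (1 for GPT-5 mini).
    if "temperature" in config:
        kwargs["temperature"] = config["temperature"]

    if config.get("provider") == "azure":
        return self._azure_llm_class(**kwargs)   # hypothetical attribute
    return self._openai_llm_class(**kwargs)      # hypothetical attribute
```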
Behavior
Before Fix
- Temperature was always passed to the model (either 0, 0.2, or the configured value)
- GPT-5 mini would reject requests with temperature != 1
After Fix
- When `temperature` is commented out or not set: the parameter is not passed to the model (the model default is used)
- When `temperature` is explicitly set: the parameter is passed with the configured value
- GPT-5 mini works correctly, as it uses its default temperature when none is specified
Testing
Created a comprehensive test script: `scripts/test_temperature_fix.py`. A sketch of the config-level checks follows the test results below.
Test results show:
- ✅ When temperature not set: No temperature passed to model, API calls succeed
- ✅ When temperature set: Correct value passed to model
- ✅ API stability: Multiple consecutive calls work correctly
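The script itself is not reproduced here; a hypothetical pytest-style sketch of the config-level checks (building on the `Optional[float] = None` declarations above, and leaving out the live API calls) could look like this:

```python
from service.config import LLMParametersConfig


def test_temperature_not_set_defaults_to_none():
    # With temperature commented out in the YAML, the field stays None,
    # so get_llm_config() leaves the key out entirely.
    assert LLMParametersConfig().temperature is None


def test_temperature_explicitly_set_is_preserved():
    assert LLMParametersConfig(temperature=0.7).temperature == 0.7
```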
Configuration Examples
No Temperature (Uses Model Default)
```yaml
# llm_prompt.yaml
parameters:
  # temperature: 0           # Commented out
  max_context_length: 100000
```
Explicit Temperature
```yaml
# llm_prompt.yaml
parameters:
  temperature: 0.7           # Will be passed to the model
  max_context_length: 100000
```
Backward Compatibility
- ✅ Existing configurations continue to work
- ✅ Legacy `config.yaml` LLM configurations are still supported
- ✅ No breaking changes to API or behavior when temperature is explicitly set
Files Modified
- `service/config.py`
  - `LLMParametersConfig.temperature` → `Optional[float] = None`
  - `LLMRagConfig.temperature` → `Optional[float] = None`
  - `get_llm_config()` → Only include temperature when set
- `service/llm_client.py`
  - `_create_llm()` → Only pass temperature when it is in the config
- `scripts/test_temperature_fix.py` (New)
  - Comprehensive test suite for temperature handling