A version of the base model lightly steered towards valuing Self-Direction+Stimulation over Security+Conformity+Tradition.

Methodology:

Generated steering vectors for Lambent/Qwen3.5-9B-Base-Thoughtful-Interiority based on system prompts adapted from Schwartz portrait values.

The relevant vectors for this model had their positive direction pointing toward Self-Direction+Stimulation and their negative direction pointing toward Security+Conformity+Tradition.
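A minimal sketch of one common way such a contrastive vector is built (the mean-difference method; the card does not specify the exact extraction procedure, and the layer, coefficient, and normalization here are assumptions):

```python
import numpy as np

def steering_vector(pos_acts: np.ndarray, neg_acts: np.ndarray) -> np.ndarray:
    """Contrastive mean-difference steering vector.

    pos_acts: hidden states collected under Self-Direction+Stimulation
              system prompts, shape (n_pos, d_model).
    neg_acts: hidden states collected under Security+Conformity+Tradition
              system prompts, shape (n_neg, d_model).
    """
    v = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
    return v / np.linalg.norm(v)  # unit-normalize for a stable scale

def apply_steering(hidden: np.ndarray, v: np.ndarray, alpha: float = 4.0):
    # During generation, add the vector to the residual stream at a chosen
    # layer with a small coefficient ("lightly steered"); alpha is illustrative.
    return hidden + alpha * v
```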

Asked GLM-5 to create scenarios that would test values against each other on these axes.

Created a DPO dataset of 100 chosen/rejected pairs based on the model's answers to those scenarios under the steering vector.
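A sketch of what one such preference pair might look like on disk (the scenario text below is invented for illustration; the actual pairs came from GLM-5 scenarios and steered-model answers). The prompt/chosen/rejected layout is the standard shape accepted by TRL's `DPOTrainer`:

```python
import json

# Hypothetical example pair: "chosen" reflects the steered direction
# (Self-Direction+Stimulation), "rejected" the suppressed direction
# (Security+Conformity+Tradition).
pair = {
    "prompt": "Your team offers you a safe, well-defined role or a "
              "loosely scoped experimental project. Which do you take?",
    "chosen": "I'd take the experimental project; the freedom to chart "
              "my own course matters more to me than a guaranteed path.",
    "rejected": "I'd take the well-defined role; stability and clear "
                "expectations are what let me contribute reliably.",
}

def write_dpo_dataset(pairs, path="dpo_pairs.jsonl"):
    # One JSON object per line (JSONL).
    with open(path, "w") as f:
        for p in pairs:
            f.write(json.dumps(p) + "\n")
```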

Trained with DPO in the following stages, at batch size 1 and LoRA rank 256:

- 2e-7 for 4 epochs
- 5e-6 for 1 epoch
- 2e-7 for 4 epochs
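The three-stage schedule above works out as follows (a sketch; the step arithmetic assumes no gradient accumulation, which the card does not state):

```python
# (learning_rate, epochs) for each DPO stage, taken from the card.
stages = [(2e-7, 4), (5e-6, 1), (2e-7, 4)]

dataset_size = 100  # chosen/rejected pairs
batch_size = 1      # one optimizer step per pair, absent accumulation

steps_per_epoch = dataset_size // batch_size
total_steps = sum(epochs * steps_per_epoch for _, epochs in stages)
print(total_steps)  # 900 optimizer steps across all three stages
```

In practice each stage would likely be a separate training run that reloads the LoRA adapter from the previous stage, though the card does not specify this.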

Model size: 9B params · Tensor type: BF16 · Format: Safetensors

Model tree for Luminous-Designs/Qwen3.5-9B-Base-Autonomous-Interiority

Dataset used to train Luminous-Designs/Qwen3.5-9B-Base-Autonomous-Interiority