# Documentation

## Docs

- [1 Configure your API key](https://docs.snowbell.ai/quickstart/configure_your_api_key.md): First, set up your API key by choosing how you want to connect.
- [2 Organize your work with projects](https://docs.snowbell.ai/quickstart/organize_your_work_with_projects.md)
- [3.1 Create a prompt](https://docs.snowbell.ai/quickstart/create_your_prompts/create_a_prompt.md)
- [3.2 Add test cases to query set from playground](https://docs.snowbell.ai/quickstart/create_your_prompts/add_test_cases_to_a_query_set.md): If you find interesting test cases in Prompt Playground, you can save them for future use by adding them to a query set.
- [3.3 Add label for your test case](https://docs.snowbell.ai/quickstart/create_your_prompts/add_label_for_your_test_case.md): After adding your test case, you can add a label to it if there is an expected output. Labeled test cases can be used for unit testing or evaluation.
- [3.4 Prompt testing](https://docs.snowbell.ai/quickstart/create_your_prompts/prompt_unit_testing.md): Save time and resources by quickly verifying your prompt's functionality with unit testing.
- [4 Upload your query sets](https://docs.snowbell.ai/quickstart/upload_your_query_sets.md)
- [5 Evaluate your models](https://docs.snowbell.ai/quickstart/evaluate_model.md): Evaluate models by running experiments and comparing metrics.
- [6 Compare experiment results](https://docs.snowbell.ai/quickstart/compare_experiment_results.md): Compare the results of any experiments using the “Compare Experiment” feature.

## OpenAPI Specs

- [openapi](https://docs.snowbell.ai/api-reference/openapi.json)