benchmarks sub-package#
API for Benchmarking
- class Benchmark(name: str, append: bool = True)#
Bases:
Project
Represents the benchmark runner.
- __init__(name: str, append: bool = True)#
Initializes a new Benchmark instance.
- Parameters:
name (str) – The name of the benchmark runner.
append (bool) – Indicates whether runs may be added to an already existing benchmark with the same name.
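Example of creating a benchmark (a minimal sketch; the import path and the benchmark name are assumptions and may need adjusting to your installation):

```python
from inductiva.benchmarks import Benchmark  # assumed import path

# Create a benchmark, or reopen an existing one of the same name (append=True).
benchmark = Benchmark(name="my-scaling-benchmark", append=True)
```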
- add_run(simulator: Simulator | None = None, input_dir: str | None = None, on: MachineGroup | ElasticMachineGroup | MPICluster | None = None, **kwargs) Self #
Adds a simulation run to the benchmark.
- Parameters:
simulator (Optional[Simulator]) – The simulator to be used for this run. If not provided, the previously set simulator will be used.
input_dir (Optional[str]) – The directory containing input files for the simulation. If not provided, the previously set input directory will be used.
on (Optional[types.ComputationalResources]) – The computational resources to run the simulation on. If not provided, the previously set resources will be used.
**kwargs – Additional keyword arguments for the simulator run. These will overwrite any previously set parameters.
- Returns:
The current instance for method chaining.
- Return type:
Self
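Example of queuing runs with method chaining (a sketch; simulator, small_mg, large_mg, and the n_cores keyword are placeholders for a previously created Simulator, machine groups, and a simulator-specific parameter):

```python
(
    benchmark
    .add_run(simulator=simulator, input_dir="input_dir/", on=small_mg, n_cores=4)
    # Omitted arguments fall back to the previously set values; kwargs override them.
    .add_run(on=large_mg, n_cores=16)
)
```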
- close()#
Close the project.
Calls to get_current_project will return None after the project is closed. Consecutive calls to this method are idempotent.
- describe() str #
Generates a string description of the object.
- Returns:
A string description of the object, including the project name, the total number of tasks, and the number of tasks by status.
- Return type:
str
- download_outputs()#
Downloads all the outputs for all the tasks in the project.
All the files will be stored inside inductiva_output/<project_name>/<task_id>.
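For instance, to summarize a benchmark and collect all of its outputs locally (a sketch, assuming benchmark already holds finished tasks):

```python
# Human-readable summary: project name and task counts by status.
print(benchmark.describe())

# Fetch every task's output into inductiva_output/<project_name>/<task_id>.
benchmark.download_outputs()
```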
- export(fmt: ExportFormat | str = ExportFormat.JSON, filename: str | None = None, status: str | TaskStatusCode | None = None, select: SelectMode | str = SelectMode.DISTINCT)#
Exports the benchmark performance metrics in the specified format.
- Parameters:
fmt (Union[ExportFormat, str]) – The format to export the results in. Defaults to ExportFormat.JSON.
filename (Optional[str]) – The name of the output file to save the exported results. Defaults to the benchmark’s name if not provided.
status (Optional[Union[TaskStatusCode, str]]) – The status of the tasks to include in the benchmarking results. Defaults to None, which includes all tasks.
select (Union[SelectMode, str]) – The data to include in the benchmarking results. Defaults to SelectMode.DISTINCT that includes only the parameters that vary between different runs.
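Example of exporting metrics (a sketch; the file name and the "success" status string are assumptions, see TaskStatusCode for the valid values):

```python
# Export only the metrics of successfully completed tasks to a JSON file.
benchmark.export(fmt="json",
                 filename="benchmark_metrics.json",
                 status="success")
```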
- get_info() ProjectInfo #
Get project information.
Get updated information on the project. This method executes a call to the backend to retrieve the most recent information about this project and stores it internally so that it becomes available through the Project.info property. This method is suitable when up-to-date information is required.
- get_tasks(last_n: int = -1, force_update=False, status: str | TaskStatusCode | None = None) List[Tasks] #
Get the tasks of this project.
Get the tasks that belong to this project, optionally filtered by status. By default, all tasks are returned, irrespective of their status. This method only issues a request to the backend if the list of tasks is None (i.e., the tasks were never requested) or if force_update is True.
- Parameters:
last_n (int) – The number of most recently submitted tasks to fetch. If last_n <= 0, all tasks submitted to the project are fetched.
force_update (bool) – Forces the request to the back end, even if we already have a list of tasks associated with this project.
status – Status of the tasks to get. If None, tasks with any status will be returned.
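Example (a sketch; the "failed" status string is an assumption, see TaskStatusCode for the valid values):

```python
# Fetch the 10 most recently submitted tasks, forcing a refresh from the backend.
recent_tasks = benchmark.get_tasks(last_n=10, force_update=True)

# Fetch only the tasks in a given status.
failed_tasks = benchmark.get_tasks(status="failed")
```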
- property info: ProjectInfo#
Returns project info.
It makes no requests to the backend; instead, it uses previously stored information, which may therefore be outdated. For a method that requests the most recent data from the backend, use get_info.
- list(last_n: int = -1, force_update=False, status: str | TaskStatusCode | None = None) List[Tasks] #
Get the tasks of this project.
Get the tasks that belong to this project, optionally filtered by status. By default, all tasks are returned, irrespective of their status. This method only issues a request to the backend if the list of tasks is None (i.e., the tasks were never requested) or if force_update is True.
- Parameters:
last_n (int) – The number of most recently submitted tasks to fetch. If last_n <= 0, all tasks submitted to the project are fetched.
force_update (bool) – Forces the request to the back end, even if we already have a list of tasks associated with this project.
status – Status of the tasks to get. If None, tasks with any status will be returned.
- property name: str#
- open()#
Open the project.
Open the project and make it the active one in the calling thread. An opened project will ensure that calls to the get_current_project function will return this project. Consecutive calls to this method are idempotent.
- Raises:
RuntimeError – If another opened project exists, i.e., get_current_project already returns a project other than this one.
- property opened: bool#
Checks if the project is open.
- run(num_repeats: int = 2, wait_for_quotas: bool = True) Self #
Executes all added runs.
Each run is executed the specified number of times, and the collection of runs is cleared afterwards.
- Parameters:
num_repeats (int) – The number of times to repeat each simulation run (default is 2).
wait_for_quotas (bool) – Indicates whether to wait for quotas to become available before starting each resource. If True, the program will actively wait in a loop, periodically sleeping and checking for quotas. If False, the program crashes if quotas are not available (default is True).
- Returns:
The current instance for method chaining.
- Return type:
Self
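Example (a sketch): repeat every queued run three times, waiting for quotas when needed:

```python
# Executes all added runs 3 times each; blocks waiting for quotas if necessary.
benchmark.run(num_repeats=3, wait_for_quotas=True)
```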
- runs_info(status: str | TaskStatusCode | None = None, select: SelectMode | str = SelectMode.DISTINCT) list #
Gathers the configuration and performance metrics of each run associated with the benchmark, including computation cost and execution time, into a list.
- Parameters:
status (Optional[Union[TaskStatusCode, str]]) – The status of the tasks to include in the benchmarking results. Defaults to None, which includes all tasks.
select (Union[SelectMode, str]) – The data to include in the benchmarking results. Defaults to SelectMode.DISTINCT that includes only the parameters that vary between different runs.
- Returns:
- A list containing the configuration and performance
metrics for each run.
- Return type:
list
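Example of inspecting the collected metrics (a sketch; the "success" status string is an assumption):

```python
# Print cost and duration metrics for each completed run.
for run_info in benchmark.runs_info(status="success"):
    print(run_info)
```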
- set_default(simulator: Simulator | None = None, input_dir: str | None = None, on: MachineGroup | ElasticMachineGroup | MPICluster | None = None, **kwargs) Self #
Sets default parameters for the benchmark runner.
This method allows you to configure default settings for the benchmark, which will be used in subsequent runs unless explicitly overridden.
- Parameters:
simulator (Optional[Simulator]) – The simulator instance to be used as the default for future runs. If not provided, the current simulator will remain unchanged.
input_dir (Optional[str]) – The directory path for input files. If not provided, the current input directory will remain unchanged.
on (Optional[types.ComputationalResources]) – The computational resources to use for running the simulations. If not specified, the current resources will remain unchanged.
**kwargs – Additional keyword arguments to set as default parameters for the simulations. These will update any existing parameters with the same names.
- Returns:
The current instance for method chaining.
- Return type:
Self
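Example of the typical set_default/add_run pattern (a sketch; simulator, default_mg, and the n_cores keyword are placeholders):

```python
# Set shared defaults once...
benchmark.set_default(simulator=simulator, input_dir="input_dir/", on=default_mg)

# ...then only vary what differs between runs.
benchmark.add_run(n_cores=2)
benchmark.add_run(n_cores=8)
```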
- terminate() Self #
Terminates all active machine groups associated with the benchmark.
- Returns:
The current instance for method chaining.
- Return type:
Self
- wait() Self #
Waits for all running tasks to complete.
- Returns:
The current instance for method chaining.
- Return type:
Self
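Putting it together, the tail end of a benchmark session often looks like this (a sketch; the file name is an assumption):

```python
(
    benchmark
    .run(num_repeats=2)   # submit all queued runs
    .wait()               # block until every task finishes
    .terminate()          # shut down the machine groups used by the benchmark
    .export(fmt="json", filename="results.json")
)
```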