Custom File Framework

The file framework consists of three classes: FileParent, Experiment, and Dataset.

class FileParent
__init__(self, name: str, path: str, existing: bool = False):

Initializes a FileParent object and creates the basic file structure, including the JSON metadata holder. If the file already exists, it simply returns a reference to that file. For example, to create a file named "ExampleFile" on your desktop, set name="ExampleFile" and path="C:\\Users\\username\\Desktop". On Windows the path must be written with double backslashes as shown (or as a raw string), since a single backslash starts an escape sequence in a Python string literal.

Parameters:
  • name (str) – The name of the FileParent file

  • path (str) – The path to the FileParent

  • existing (bool) – Whether the file already exists

Returns:

None
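The backslash requirement is a Python string-literal rule rather than a framework one; a doubled-backslash literal and a raw string produce the same path (the values below are hypothetical):

```python
# Hypothetical Windows path; FileParent itself is part of this framework
# and is not importable here. Inside a normal string literal, backslashes
# must be doubled; a raw string avoids the escaping entirely.
escaped = "C:\\Users\\username\\Desktop"
raw = r"C:\Users\username\Desktop"
print(escaped == raw)
```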

update_metadata(self, key: str, value: any) -> None:

Update file JSON metadata with key-value pair

Parameters:
  • key (str) – The key of the metadata

  • value (any) – The value of the metadata. Can be any datatype supported by JSON

Returns:

None

read_metadata(self) -> dict:

Read JSON metadata from file

Returns:

The metadata dictionary for the FileParent object

Return type:

dict
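The metadata methods accept any JSON-serializable value. A sketch of what "supported by JSON" means in practice, using the standard json module rather than the framework itself (the keys and values are hypothetical):

```python
import json

# A plain dict standing in for the file's metadata holder (illustration
# only; update_metadata/read_metadata operate on the real file).
metadata = {}
metadata["sample_rate"] = 1.25e9          # numbers are JSON-serializable
metadata["channels"] = ["ch1", "ch2"]     # so are lists, strings, and bools
# Values survive a JSON round trip unchanged
round_trip = json.loads(json.dumps(metadata))
print(round_trip == metadata)
```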

get_experiment(self, experiment_name: str) -> 'Experiment':

Get an existing experiment from the FileParent.

Parameters:

experiment_name (str) – The name of the requested experiment

Returns:

The requested experiment. None if it does not exist.

Return type:

Experiment. None if not found.

delete_file(self) -> None:

Deletes the entire file. Confirmation required.

Returns:

None

delete_experiment(self, experiment_name: str) -> None:

Deletes an experiment and all of its datasets from a FileParent. Confirmation Required.

Parameters:

experiment_name (str) – The name of the experiment

Returns:

None

query_experiments_with_metadata(self, key: str, value: any, regex: bool = False) -> list['Experiment']:

Query all experiments in the FileParent object based on exact metadata key-value pair or using regular expressions.

Parameters:
  • key (str) – The key to be queried

  • value (any) – The value to be queried. Supply a regular expression if the regex parameter is set to True. Supplying a value of "*" returns all experiments that have the specified key.

Returns:

A list of queried experiments

Return type:

list['Experiment']
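A sketch of the matching semantics when regex=True, using Python's re module on hypothetical experiment metadata (the framework's actual matching code is not shown in this documentation):

```python
import re

# Hypothetical metadata for three experiments (illustration only)
experiments = {
    "exp_aes_1": {"cipher": "AES-128"},
    "exp_aes_2": {"cipher": "AES-256"},
    "exp_rsa_1": {"cipher": "RSA-2048"},
}

# With regex=True, the supplied value is treated as a pattern and
# matched against each experiment's stored value for the given key
pattern = "AES-.*"
matches = [name for name, md in experiments.items()
           if "cipher" in md and re.fullmatch(pattern, str(md["cipher"]))]
print(matches)
```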

class Experiment
__init__(self, name: str, path: str, file_format_parent: FileParent, existing: bool = False, index: int = 0, experiment: dict = None):

Creates an Experiment object. Do not call this constructor directly; use FileParent.add_experiment() to create a new Experiment object.

update_metadata(self, key: str, value: any) -> None:

Update the experiment metadata using a new key value pair.

Parameters:
  • key (str) – The key of the metadata

  • value (any) – The value of the metadata. Can be any datatype supported by JSON.

Returns:

None

read_metadata(self) -> dict:

Reads experiment metadata

Returns:

The experiment’s metadata dictionary

Return type:

dict

add_dataset(self, name: str, data_to_add: np.ndarray, datatype: any, partition: bool, trace_per_partition: int) -> 'Dataset':

Adds a new Dataset to a given Experiment

Parameters:
  • name (str) – The desired name of the new dataset

  • data_to_add (np.ndarray) – The NumPy array of data to be added to the new dataset

  • datatype (any) – The datatype of the dataset

  • partition (bool) – Flag indicating whether to partition the dataset

  • trace_per_partition (int) – Number of traces per partition

Returns:

The newly created Dataset object

Return type:

Dataset
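A sketch of what partitioning does to the data layout, using plain NumPy on synthetic traces (the framework's on-disk storage format is not shown here):

```python
import numpy as np

# 1000 traces of 5000 samples each (synthetic placeholder data)
traces = np.zeros((1000, 5000))
trace_per_partition = 250

# Partitioning splits the trace axis into fixed-size chunks
partitions = np.split(traces, traces.shape[0] // trace_per_partition)
print(len(partitions), partitions[0].shape)
```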

get_dataset(self, dataset_name: str, partition: bool = False, index: int = -1) -> 'Dataset':

Get a dataset from a given experiment.

Parameters:
  • dataset_name (str) – The name of the requested dataset

  • partition (bool) – Flag indicating whether to retrieve a partitioned dataset

  • index (int) – The index of the specific partition to retrieve

Raises:

ValueError – If a specified partition does not exist.

Returns:

The requested dataset. None if it is not found.

Return type:

Dataset. None if not found.

get_partition_dataset(self, dataset_name: str, partition: bool = True, index_range: tuple = (None, None)) -> np.ndarray:

Get a partitioned dataset from a given experiment, with the start and end partition indices passed as a range tuple.

Parameters:
  • dataset_name (str) – The name of the requested dataset

  • partition (bool) – Flag indicating whether to retrieve a partitioned dataset

  • index_range (tuple) – A tuple (start_index, end_index) specifying the range for concatenating partitions

Raises:

ValueError – If a specified partition does not exist.

Returns:

The requested dataset. None if it is not found.

Return type:

np.ndarray. None if not found.
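A sketch of range-based retrieval in plain NumPy, assuming a half-open (start, end) range as in Python slicing (the documentation does not state whether the end index is inclusive):

```python
import numpy as np

# Eight partitions of 100 traces each (synthetic data); the i-th
# partition is filled with the value i so the result is easy to check.
partitions = [np.full((100, 10), i) for i in range(8)]

start, end = 2, 5
# Concatenate the selected partitions along the trace axis
combined = np.concatenate(partitions[start:end], axis=0)
print(combined.shape, combined[0, 0], combined[-1, 0])
```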

delete_dataset(self, dataset_name: str, partition: bool = False, index: int = None, index_range: tuple = (None, None)) -> None:

Deletes a dataset and all its contents. Confirmation required.

Parameters:
  • dataset_name (str) – The name of the dataset to delete.

  • partition (bool) – If True, deletes a specific partition or a range of partitions.

  • index (int, optional) – The index of the specific partition to delete (if deleting a single partition).

  • index_range (tuple, optional) – A tuple (start_index, end_index) specifying the range of partitions to delete.

Returns:

None

query_datasets_with_metadata(self, key: str, value: any, regex: bool = False) -> list['Dataset']:

Query all datasets in the Experiment object based on exact metadata key-value pair or using regular expressions.

Parameters:
  • key (str) – The key to be queried

  • value (any) – The value to be queried. Supply a regular expression if the regex parameter is set to True. Supplying a value of "*" returns all datasets that have the specified key.

Returns:

A list of queried datasets

Return type:

list['Dataset']

get_visualization_path(self) -> str:

Get the path to the visualization directory for the Experiment object.

Returns:

The visualization path of the experiment

Return type:

str

calculate_snr(self, traces_dataset: str, intermediate_fcn: Callable, *args: any, visualize: bool = False, save_data: bool = False, save_graph: bool = False, partition: bool = False, index: int = None, index_range: tuple = (None, None)) -> np.ndarray:

Computes the integrated signal-to-noise ratio (SNR) metric for a trace dataset.

Parameters:
  • traces_dataset (str) – The name of the dataset containing trace data.

  • intermediate_fcn (Callable) – A function to compute intermediate values used for SNR calculation.

  • *args

    Additional datasets required for intermediate function parameters.

  • visualize (bool) – Whether to generate a visualization of the SNR results.

  • save_data (bool) – Whether to store the computed SNR metric as a dataset.

  • save_graph (bool) – Whether to save the visualization to the experiments folder.

  • partition (bool) – Whether to compute SNR on a specific partition of the dataset.

  • index (int) – Index of the partition to use if applicable.

  • index_range (tuple) – The start and end indices for dataset partitioning.

Returns:

The computed SNR metric as a NumPy array.

Return type:

np.ndarray
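For reference, a minimal NumPy sketch of the standard side-channel SNR computation (variance of the per-group mean traces over the mean of the per-group variances, where groups are defined by the intermediate values); this illustrates the metric itself, not the framework's implementation:

```python
import numpy as np

def snr(traces, labels):
    """Per-sample SNR: variance of the group means divided by the
    mean of the group variances, grouping traces by label."""
    groups = [traces[labels == v] for v in np.unique(labels)]
    means = np.array([g.mean(axis=0) for g in groups])
    variances = np.array([g.var(axis=0) for g in groups])
    return means.var(axis=0) / variances.mean(axis=0)

# Synthetic traces: sample 0 leaks the label, samples 1-2 are pure noise
rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=400)
leak = labels[:, None].astype(float)
noise = rng.normal(scale=0.5, size=(400, 3))
traces = noise + np.hstack([leak, np.zeros((400, 2))])

result = snr(traces, labels)
print(result.shape)  # one SNR value per sample point
```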

calculate_t_test(self, fixed_dataset: str, random_dataset: str, visualize: bool = False, save_data: bool = False, save_graph: bool = False, partition: bool = False, index: int = None, index_range: tuple = (None, None)) -> tuple[np.ndarray, np.ndarray]:

Computes the integrated t-test metric between a fixed and a random trace set.

Parameters:
  • fixed_dataset (str) – The dataset containing fixed traces.

  • random_dataset (str) – The dataset containing random traces.

  • visualize (bool) – Whether to generate a visualization of the t-test results.

  • save_data (bool) – Whether to store the computed t-test results as datasets.

  • save_graph (bool) – Whether to save the visualization to the experiments folder.

  • partition (bool) – Whether to compute t-test on a specific partition of the dataset.

  • index (int) – Index of the partition to use if applicable.

  • index_range (tuple) – The start and end indices for dataset partitioning.

Returns:

The computed t-test values and maximum t-values as NumPy arrays.

Return type:

tuple[np.ndarray, np.ndarray]
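A minimal NumPy sketch of the per-sample Welch t-statistic commonly used for fixed-vs-random leakage assessment (an illustration of the statistic, not the framework's implementation):

```python
import numpy as np

def welch_t(fixed, random):
    """Per-sample Welch t-statistic between two trace sets."""
    mf, mr = fixed.mean(axis=0), random.mean(axis=0)
    vf, vr = fixed.var(axis=0, ddof=1), random.var(axis=0, ddof=1)
    nf, nr = fixed.shape[0], random.shape[0]
    return (mf - mr) / np.sqrt(vf / nf + vr / nr)

# Synthetic sets whose means differ at every sample point
rng = np.random.default_rng(1)
fixed = rng.normal(loc=1.0, size=(500, 4))
random = rng.normal(loc=0.0, size=(500, 4))

t = welch_t(fixed, random)
print(t.shape)  # one t-value per sample point
```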

calculate_correlation(self, predicted_dataset_name: str, observed_dataset_name: str, order: int, window_size_fma: int, intermediate_fcn: Callable, *args: any, visualize: bool = False, save_data: bool = False, save_graph: bool = False, partition: bool = False, index: int = None, index_range: tuple = (None, None)) -> np.ndarray:

Computes the integrated correlation metric between predicted and observed leakage.

Parameters:
  • predicted_dataset_name (str) – The name of the dataset containing predicted leakage values.

  • observed_dataset_name (str) – The name of the dataset containing observed leakage values.

  • order (int) – The order of the correlation analysis.

  • window_size_fma (int) – The window size used for the moving-average filter

  • intermediate_fcn (Callable) – A function to compute intermediate values used for correlation analysis.

  • *args

    Additional datasets required for intermediate function parameters.

  • visualize (bool) – Whether to generate a visualization of the correlation results.

  • save_data (bool) – Whether to store the computed correlation metric as a dataset.

  • save_graph (bool) – Whether to save the visualization to the experiments folder.

  • partition (bool) – Whether to compute correlation on a specific partition of the dataset.

  • index (int) – Index of the partition to use if applicable.

  • index_range (tuple) – The start and end indices for dataset partitioning.

Returns:

The computed correlation metric as a NumPy array.

Return type:

np.ndarray
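A minimal NumPy sketch of a first-order Pearson correlation between a predicted leakage vector and each sample point of the observed traces (illustrating the metric, not the framework's implementation, which also supports higher orders and moving-average filtering):

```python
import numpy as np

def cpa_correlation(predicted, observed):
    """Pearson correlation between a predicted leakage vector and
    each sample point of the observed traces."""
    p = predicted - predicted.mean()
    o = observed - observed.mean(axis=0)
    return (p @ o) / (np.linalg.norm(p) * np.linalg.norm(o, axis=0))

# Synthetic data: only sample point 2 correlates with the prediction
rng = np.random.default_rng(2)
predicted = rng.normal(size=300)
observed = rng.normal(size=(300, 5))
observed[:, 2] += 2.0 * predicted

r = cpa_correlation(predicted, observed)
print(int(np.argmax(np.abs(r))))
```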

class Dataset
__init__(self, name: str, path: str, file_format_parent: FileParent, experiment_parent: Experiment, index: int, existing: bool = False, dataset: dict = None):

Creates a Dataset object. Do not call this constructor directly; use Experiment.add_dataset() to create a new Dataset object.

read_data(self, start: int, end: int) -> np.ndarray:

Read data from the dataset between a given start and end index.

Parameters:
  • start (int) – the start index of the data

  • end (int) – the end index of the data

Returns:

A NumPy array containing the requested data over the specified interval

Return type:

np.ndarray

read_all(self) -> np.ndarray:

Read all data from the dataset

Returns:

All data contained in the dataset

Return type:

np.ndarray

add_data(self, data_to_add: np.ndarray, datatype: any) -> None:

Add data to an existing dataset

Parameters:
  • data_to_add (np.ndarray) – The data to be added to the dataset as a NumPy array

  • datatype (any) – The datatype of the data being added

Returns:

None
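Conceptually, adding data appends along the trace axis; a plain NumPy sketch with synthetic shapes (the framework's storage details are not shown here):

```python
import numpy as np

# An existing dataset of 100 traces with 50 samples each (synthetic data)
existing = np.zeros((100, 50))
new_traces = np.ones((20, 50))

# Appending grows the dataset along the trace axis, so the new data
# must match the existing data on the remaining dimensions
combined = np.concatenate([existing, new_traces], axis=0)
print(combined.shape)
```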

update_metadata(self, key: str, value: any) -> None:

Update the dataset metadata using a new key value pair.

Parameters:
  • key (str) – The key of the metadata

  • value (any) – The value of the metadata. Can be any datatype supported by JSON.

Returns:

None