From 18b660266082590e34cce063777f1a08a7bea736 Mon Sep 17 00:00:00 2001 From: Xinrui Chen Date: Fri, 26 Sep 2025 17:08:29 +0800 Subject: [PATCH] [MindFormers] Fix contents architecture --- .../inference_precision_comparison.md | 2 +- .../docs/source_en/feature/evaluation.md | 272 ------------ .../guide/{benchmarks.md => evaluation.md} | 369 +++++++++++++--- docs/mindformers/docs/source_en/index.rst | 8 +- .../inference_precision_comparison.md | 2 +- .../docs/source_zh_cn/feature/evaluation.md | 272 ------------ .../docs/source_zh_cn/full-process_1.png | Bin 34378 -> 0 bytes .../docs/source_zh_cn/full-process_2.png | Bin 24522 -> 0 bytes .../docs/source_zh_cn/full-process_3.png | Bin 27339 -> 0 bytes .../docs/source_zh_cn/guide/deployment.md | 2 +- .../guide/{benchmarks.md => evaluation.md} | 413 ++++++++++++++---- .../docs/source_zh_cn/guide/inference.md | 2 +- .../docs/source_zh_cn/guide/llm_training.md | 2 +- .../docs/source_zh_cn/guide/pre_training.md | 2 +- .../guide/supervised_fine_tuning.md | 8 +- docs/mindformers/docs/source_zh_cn/index.rst | 46 +- 16 files changed, 677 insertions(+), 723 deletions(-) delete mode 100644 docs/mindformers/docs/source_en/feature/evaluation.md rename docs/mindformers/docs/source_en/guide/{benchmarks.md => evaluation.md} (44%) delete mode 100644 docs/mindformers/docs/source_zh_cn/feature/evaluation.md delete mode 100644 docs/mindformers/docs/source_zh_cn/full-process_1.png delete mode 100644 docs/mindformers/docs/source_zh_cn/full-process_2.png delete mode 100644 docs/mindformers/docs/source_zh_cn/full-process_3.png rename docs/mindformers/docs/source_zh_cn/guide/{benchmarks.md => evaluation.md} (38%) diff --git a/docs/mindformers/docs/source_en/advanced_development/inference_precision_comparison.md b/docs/mindformers/docs/source_en/advanced_development/inference_precision_comparison.md index e816f6e179..d86729bb64 100644 --- a/docs/mindformers/docs/source_en/advanced_development/inference_precision_comparison.md +++ 
b/docs/mindformers/docs/source_en/advanced_development/inference_precision_comparison.md @@ -24,7 +24,7 @@ For information on how the model performs online reasoning tasks, please refer t ### Dataset Evaluation After verification through online inference, the model's benchmark output remains basically consistent for the same input. However, the data volume is relatively small, and the problems covered are not comprehensive across domains. Therefore, the model's precision ultimately needs to be verified through dataset evaluation. Only when the dataset evaluation score is within 0.4% of the benchmark data can the model's precision be considered to meet the acceptance criteria. -For information on how to evaluate the model using datasets, please refer to the [Evaluation Guide](https://www.mindspore.cn/mindformers/docs/en/master/guide/benchmarks.html). +For information on how to evaluate the model using datasets, please refer to the [Evaluation Guide](https://www.mindspore.cn/mindformers/docs/en/master/guide/evaluation.html). 
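The 0.4% acceptance criterion above can be expressed as a simple check. A minimal sketch (the helper name and the absolute-tolerance interpretation are assumptions, not part of any MindSpore tooling):

```python
# Hypothetical check for the acceptance criterion described above: the
# dataset evaluation score must stay within 0.4% of the benchmark score.
# Treating the tolerance as an absolute difference is an assumption here.
def meets_acceptance(score: float, benchmark: float, tolerance: float = 0.004) -> bool:
    return abs(score - benchmark) <= tolerance

print(meets_acceptance(0.5011, 0.5034))  # deviation 0.0023 -> True
```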
## Positioning Precision Issue diff --git a/docs/mindformers/docs/source_en/feature/evaluation.md b/docs/mindformers/docs/source_en/feature/evaluation.md deleted file mode 100644 index c81958720f..0000000000 --- a/docs/mindformers/docs/source_en/feature/evaluation.md +++ /dev/null @@ -1,272 +0,0 @@ -# Evaluation - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/mindformers/docs/source_en/feature/evaluation.md) - -## Harness Evaluation - -### Introduction - -[LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) is an open-source language model evaluation framework that provides evaluation of more than 60 standard academic datasets. It supports multiple evaluation modes, such as HuggingFace model evaluation, PEFT adapter evaluation, and vLLM inference evaluation, as well as customized prompts and evaluation metrics, including evaluation tasks of the loglikelihood, generate_until, and loglikelihood_rolling types. MindSpore Transformers has been adapted to the Harness evaluation framework, so MindSpore Transformers models can be loaded for evaluation. - -The currently verified models and supported evaluation tasks are shown in the table below (the remaining models and evaluation tasks are being actively verified and adapted; please watch for version updates): - -| Verified models | Supported evaluation tasks | -|-----------------|------------------------------------------------| -| Llama3 | gsm8k, ceval-valid, mmlu, cmmlu, race, lambada | -| Llama3.1 | gsm8k, ceval-valid, mmlu, cmmlu, race, lambada | -| Qwen2 | gsm8k, ceval-valid, mmlu, cmmlu, race, lambada | - -### Installation - -Harness supports two installation methods: pip installation and source code compilation installation. 
Pip installation is simpler and faster, while installing from source makes debugging and analysis easier; users can choose the installation method that suits their needs. - -#### pip Installation - -Users can execute the following command to install Harness (version 0.4.4 is recommended): - -```shell -pip install lm_eval==0.4.4 -``` - -#### Source Code Compilation Installation - -Users can execute the following command to compile and install Harness: - -```bash -git clone --depth 1 -b v0.4.4 https://github.com/EleutherAI/lm-evaluation-harness -cd lm-evaluation-harness -pip install -e . -``` - -### Usage - -#### Preparations Before Evaluation - - 1. Create a new directory, for example `model_dir`, to store the model yaml files. - 2. Place the model inference yaml configuration file (predict_xxx.yaml) in the directory created in the previous step. For the directory location of each model's inference yaml configuration file, refer to the [model library](../introduction/models.md). - 3. Configure the yaml file. If the model class, model Config class, and model Tokenizer class in the yaml use plug-in code, that is, the code files are in the [research](https://gitee.com/mindspore/mindformers/tree/master/research) directory or other external directories, the yaml file needs to be modified: under the corresponding class's `type` field, add an `auto_register` field in the `module.class` format (`module` is the file name of the script where the class is located, and `class` is the class name; if the field already exists, no modification is needed). - - Using the [predict_llama3_1_8b.yaml](https://gitee.com/mindspore/mindformers/blob/master/research/llama3_1/llama3_1_8b/predict_llama3_1_8b.yaml) configuration as an example, modify some of the configuration items as follows: - - ```yaml - run_mode: 'predict' # Set inference mode - load_checkpoint: 'model.ckpt' # path of ckpt - processor: - tokenizer: - vocab_file: "tokenizer.model" # path of tokenizer - type: Llama3Tokenizer - auto_register: llama3_tokenizer.Llama3Tokenizer - ``` - - For detailed instructions on each configuration item, please refer to the [configuration description](../feature/configuration.md). - 4. If you use the `ceval-valid`, `mmlu`, `cmmlu`, `race`, and `lambada` datasets for evaluation, you need to set `use_flash_attention` to `False`. Using `predict_llama3_1_8b.yaml` as an example, modify the yaml as follows: - - ```yaml - model: - model_config: - # ... - use_flash_attention: False # Set to False - # ... - ``` - -#### Evaluation Example - -Execute the [run_harness.sh](https://gitee.com/mindspore/mindformers/blob/master/toolkit/benchmarks/run_harness.sh) script to run the evaluation. - -The following table lists the parameters of the `run_harness.sh` script: - -| Parameter | Type | Description | Required | |---------------|------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------| -| `--register_path`| str | The absolute path of the directory where the plug-in code is located, for example, a model directory under the [research](https://gitee.com/mindspore/mindformers/tree/master/research) directory. | No (required when plug-in code is used) | -| `--model` | str | The value must be `mf`, indicating the MindSpore Transformers evaluation policy. | Yes | -| `--model_args` | str | Model and evaluation parameters. For details, see MindSpore Transformers model parameters. | Yes | -| `--tasks` | str | Dataset name. 
Multiple datasets can be specified, separated by commas (,). | Yes | -| `--batch_size` | int | Number of samples per batch. | No | - -The following table lists the parameters of `model_args`: - -| Parameter | Type | Description | Required | |--------------|------|--------------------------------------------------------------------------|----------| -| `pretrained` | str | Model directory. | Yes | -| `max_length` | int | Maximum length of model generation. | No | -| `use_parallel` | bool | Whether to enable the parallel strategy (must be enabled for multi-card evaluation). | No | -| `tp` | int | Tensor parallel size. | No | -| `dp` | int | Data parallel size. | No | - -Harness evaluation supports single-device single-card, single-device multiple-card, and multiple-device multiple-card scenarios. Sample evaluations for each scenario are listed below: - -1. Single-Card Evaluation Example - - ```shell - source toolkit/benchmarks/run_harness.sh \ - --register_path mindformers/research/llama3_1 \ - --model mf \ - --model_args pretrained=model_dir \ - --tasks gsm8k - ``` - -2. Multi-Card Evaluation Example - - ```shell - source toolkit/benchmarks/run_harness.sh \ - --register_path mindformers/research/llama3_1 \ - --model mf \ - --model_args pretrained=model_dir,use_parallel=True,tp=4,dp=1 \ - --tasks ceval-valid \ - --batch_size BATCH_SIZE WORKER_NUM - ``` - - - `BATCH_SIZE` is the batch size used by the model; - - `WORKER_NUM` is the number of compute devices. - -3. 
Multi-Device and Multi-Card Example - - Node 0 (Master) Command: - - ```shell - source toolkit/benchmarks/run_harness.sh \ - --register_path mindformers/research/llama3_1 \ - --model mf \ - --model_args pretrained=model_dir,use_parallel=True,tp=8,dp=1 \ - --tasks lambada \ - --batch_size 2 8 4 192.168.0.0 8118 0 output/msrun_log False 300 - ``` - - Node 1 (Secondary Node) Command: - - ```shell - source toolkit/benchmarks/run_harness.sh \ - --register_path mindformers/research/llama3_1 \ - --model mf \ - --model_args pretrained=model_dir,use_parallel=True,tp=8,dp=1 \ - --tasks lambada \ - --batch_size 2 8 4 192.168.0.0 8118 1 output/msrun_log False 300 - ``` - - Node n (Nth Node) Command: - - ```shell - source toolkit/benchmarks/run_harness.sh \ - --register_path mindformers/research/llama3_1 \ - --model mf \ - --model_args pretrained=model_dir,use_parallel=True,tp=8,dp=1 \ - --tasks lambada \ - --batch_size BATCH_SIZE WORKER_NUM LOCAL_WORKER MASTER_ADDR MASTER_PORT NODE_RANK output/msrun_log False CLUSTER_TIME_OUT - ``` - - - `BATCH_SIZE` is the sample size for batch processing of models; - - `WORKER_NUM` is the total number of compute devices used on all nodes; - - `LOCAL_WORKER` is the number of compute devices used on the current node; - - `MASTER_ADDR` is the IP address of the primary node to be started in distributed mode; - - `MASTER_PORT` is the Port number bound for distributed startup; - - `NODE_RANK` is the Rank ID of the current node; - - `CLUSTER_TIME_OUT` is the waiting time for distributed startup, in seconds. - - To execute the multi-node multi-device script for evaluating, you need to run the script on different nodes and set MASTER_ADDR to the IP address of the primary node. The IP address should be the same across all nodes, and only the NODE_RANK parameter varies across nodes. - -### Viewing the Evaluation Results - -After executing the evaluation command, the evaluation results will be printed out on the terminal. 
Taking gsm8k as an example, the evaluation results are as follows, where Filter indicates how the model outputs are matched and extracted, n-shot indicates the number of examples included in the prompt, Metric indicates the evaluation metric, Value is the evaluation score, and Stderr is the standard error of the score. - -| Tasks | Version | Filter | n-shot | Metric | | Value | | Stderr | |-------|--------:|------------------|-------:|-------------|---|--------|---|--------| -| gsm8k | 3 | flexible-extract | 5 | exact_match | ↑ | 0.5034 | ± | 0.0138 | -| | | strict-match | 5 | exact_match | ↑ | 0.5011 | ± | 0.0138 | - -### FAQ - -1. When using Harness for evaluation, an `SSLError` is reported while loading HuggingFace datasets: - - Refer to the [SSL Error reporting solution](https://stackoverflow.com/questions/71692354/facing-ssl-error-with-huggingface-pretrained-models). - - Note: Turning off SSL verification is risky and may expose you to man-in-the-middle (MITM) attacks. It is recommended only in test environments or over connections you fully trust. - -## Evaluation After Training - -### Overview - -After training, evaluation tasks are generally run with the trained weights to verify the training results. This chapter introduces the necessary steps from training to evaluation, including: - -1. Processing of distributed weights after training (this step can be skipped for single-card training); -2. Writing inference configuration files for evaluation based on the training configuration; -3. Running a simple inference task to verify the correctness of the above steps; -4. Performing the evaluation task. - -Users can refer to this document to evaluate their trained models. - -### Distributed Weight Merging - -If the weights generated after training are distributed, the existing distributed weights need to be merged into complete weights first, and then the weights can be loaded through online slicing to complete the inference task. 
The [safetensors weight merging script](https://gitee.com/mindspore/mindformers/blob/master/toolkit/safetensors/unified_safetensors.py) provided by MindSpore Transformers produces merged weights in the complete-weight format. - -Parameters can be filled in as follows: - -```shell -python toolkit/safetensors/unified_safetensors.py \ - --src_strategy_dirs src_strategy_path_or_dir \ - --mindspore_ckpt_dir mindspore_ckpt_dir \ - --output_dir output_dir \ - --file_suffix "1_1" \ - --filter_out_param_prefix "adam_" -``` - -Script parameter description: - -- src_strategy_dirs: The path to the distributed strategy file corresponding to the source weights, usually saved in the output/strategy/ directory by default after starting the training task. It should be filled in according to the following situations: - - 1. Source weights enable pipeline parallelism: Weight conversion is based on the merged strategy file; fill in the path of the distributed strategy folder. The script will automatically merge all ckpt_strategy_rank_x.ckpt files in the folder and generate merged_ckpt_strategy.ckpt in the folder. If merged_ckpt_strategy.ckpt already exists, you can directly fill in the path of this file. - 2. Source weights do not enable pipeline parallelism: Weight conversion can be based on any strategy file; just fill in the path of any ckpt_strategy_rank_x.ckpt file. - - Note: If merged_ckpt_strategy.ckpt already exists in the strategy folder and the folder path is still passed in, the script will first delete the old merged_ckpt_strategy.ckpt and then merge to generate a new merged_ckpt_strategy.ckpt for weight conversion. Therefore, please ensure that the folder has sufficient write permissions; otherwise, the operation will report an error. - -- mindspore_ckpt_dir: Path to the distributed weights; fill in the path of the folder where the source weights are located. 
The source weights should be stored in the format model_dir/rank_x/xxx.safetensors; fill in model_dir as the folder path. -- output_dir: Save path of the target weights; the default value is `/new_llm_data/******/ckpt/nbg3_31b/tmp`, that is, the target weights will be placed in that directory. -- file_suffix: Naming suffix of the target weight files; the default value is "1_1", that is, the target weight files will be named in the *1_1.safetensors format. -- has_redundancy: Whether the source weights to be merged contain redundancy; the default is True. -- filter_out_param_prefix: Parameters to filter out during merging; filtering matches by prefix name, for example, the optimizer parameter prefix "adam_". -- max_process_num: Maximum number of processes for merging. Default value: 64. - -### Inference Configuration Development - -After the weight files are merged, you need to develop the corresponding inference configuration file based on the training configuration file. - -Taking Qwen3 as an example, modify the [Qwen3 training configuration](https://gitee.com/mindspore/mindformers/blob/master/configs/qwen3/finetune_qwen3.yaml) with reference to the [Qwen3 inference configuration](https://gitee.com/mindspore/mindformers/blob/master/configs/qwen3/predict_qwen3.yaml): - -The main modifications to the Qwen3 training configuration include: - -- Modify the value of run_mode to "predict". -- Add pretrained_model_dir: the Hugging Face or ModelScope model directory path, which holds the model configuration, tokenizer, and other files. -- In parallel_config, only keep data_parallel and model_parallel. -- In model_config, only keep compute_dtype, layernorm_compute_dtype, softmax_compute_dtype, rotary_dtype, params_dtype, and keep the precision consistent with the inference configuration. -- In the parallel module, only keep parallel_mode and enable_alltoall, and modify the value of parallel_mode to "MANUAL_PARALLEL". 
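Taken together, the modification points above yield an inference configuration along these lines (an illustrative sketch only; the dtype values and paths are assumptions, and the actual predict_qwen3.yaml is authoritative):

```yaml
run_mode: 'predict'                     # switched from the training mode
pretrained_model_dir: '/path/to/qwen3'  # Hugging Face or ModelScope model directory
parallel_config:
  data_parallel: 1
  model_parallel: 2
model_config:
  compute_dtype: "bfloat16"
  layernorm_compute_dtype: "float32"
  softmax_compute_dtype: "float32"
  rotary_dtype: "float32"
  params_dtype: "bfloat16"
parallel:
  parallel_mode: "MANUAL_PARALLEL"
  enable_alltoall: False
```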
- -### Inference Function Verification - -After the weights and configuration files are ready, run inference on a single input to check whether the output meets expectations. Refer to the [inference document](https://gitee.com/mindspore/docs/blob/master/docs/mindformers/docs/source_en/guide/inference.md) to start the inference task. - -For example: - -```shell -python run_mindformer.py \ ---config configs/qwen3/predict_qwen3.yaml \ ---run_mode predict \ ---use_parallel False \ ---predict_data '帮助我制定一份去上海的旅游攻略' -``` - -If the output appears garbled or does not meet expectations, you need to locate the precision problem. - -1. Check the correctness of the model configuration - - Confirm that the model structure is consistent with the training configuration. Refer to the training configuration template usage tutorial to ensure that the configuration file complies with the specifications, avoiding inference exceptions caused by parameter errors. - -2. Verify the completeness of weight loading - - Check whether the model weight files are loaded completely, and ensure that the weight names strictly match the model structure. Refer to the new model weight conversion adaptation tutorial and check the weight loading logs to confirm that the weight slicing is correct, avoiding inference errors caused by mismatched weights. - -3. Locate inference precision issues - - If the model configuration and weight loading are both correct but the inference results still do not meet expectations, precision comparison analysis is required. Refer to the inference precision comparison document to compare the output differences between training and inference layer by layer, and troubleshoot potential data preprocessing, computational precision, or operator issues. - -### Evaluation Using AISBench - -Refer to the AISBench evaluation section and use the AISBench tool to verify model precision. 
\ No newline at end of file diff --git a/docs/mindformers/docs/source_en/guide/benchmarks.md b/docs/mindformers/docs/source_en/guide/evaluation.md similarity index 44% rename from docs/mindformers/docs/source_en/guide/benchmarks.md rename to docs/mindformers/docs/source_en/guide/evaluation.md index a70160efb4..5dd22cafd3 100644 --- a/docs/mindformers/docs/source_en/guide/benchmarks.md +++ b/docs/mindformers/docs/source_en/guide/evaluation.md @@ -1,10 +1,14 @@ -# Benchmark +# Evaluation -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/mindformers/docs/source_en/guide/benchmarks.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/mindformers/docs/source_en/guide/evaluation.md) ## Overview -The rapid development of Large Language Models (LLMs) has created a systematic need to evaluate their capabilities and limitations. Model evaluation has become essential infrastructure in the AI field. The mainstream model evaluation process is like an exam, where model capabilities are assessed through the accuracy rate of the model's answers to test papers (evaluation datasets). Common datasets such as CEVAL contain 52 different subject professional examination multiple-choice questions in Chinese, primarily evaluating the model's knowledge base. GSM8K consists of 8,501 high-quality elementary school math problems written by human problem setters, primarily evaluating the model's reasoning ability, and so on. Of course, due to the development of large model capabilities, these datasets all face issues of data contamination and saturation, which is only mentioned here for illustration. 
At the same time, many non-question-answering cutting-edge model evaluation methods have emerged in the industry, which are not within the scope of this tutorial. +The rapid development of Large Language Models (LLMs) has created a systematic need to evaluate their capabilities and limitations. Model evaluation has become essential infrastructure in the AI field. The mainstream model evaluation process is like an exam, where model capabilities are assessed through the accuracy rate of the model's answers to test papers (evaluation datasets). For example, CEVAL contains Chinese multiple-choice questions from professional examinations across 52 subjects and primarily evaluates the model's knowledge, while GSM8K consists of 8,501 high-quality grade-school math problems written by human problem setters and primarily evaluates the model's reasoning ability. + +In previous versions, MindSpore Transformers adapted the Harness evaluation framework for certain legacy architecture models. The latest version supports the AISBench evaluation framework, meaning that, in theory, any model that supports service-oriented deployment can be evaluated using AISBench. + +## AISBench Benchmarking For service-oriented evaluation of MindSpore Transformers, the AISBench Benchmark suite is recommended. AISBench Benchmark is a model evaluation tool built on OpenCompass, compatible with OpenCompass's configuration system, dataset structure, and model backend implementation, while extending support for service-oriented models. It supports 30+ open-source datasets: [Evaluation datasets supported by AISBench](https://gitee.com/aisbench/benchmark/blob/master/doc/users_guide/datasets.md#%E5%BC%80%E6%BA%90%E6%95%B0%E6%8D%AE%E9%9B%86). @@ -17,11 +21,11 @@ Both tasks follow the same evaluation paradigm. 
The user side sends requests and ![benchmark_illustrate](./images/benchmark_illustrate.png) -## Preparations +### Preparations The preparation phase mainly completes three tasks: installing the AISBench evaluation environment, downloading datasets, and starting the vLLM-MindSpore service. -### Step 1 Install AISBench Evaluation Environment +**Step 1 Install AISBench Evaluation Environment** AISBench depends on both torch and transformers, but the official vLLM-MindSpore image contains a mocked torch implementation from the msadapter package, which may cause conflicts; it is therefore recommended to set up a separate container for installing the AISBench evaluation environment. If you still want to create the container from the vLLM-MindSpore image, you need to perform the following steps to remove the existing torch and transformers packages inside the container after launching it: @@ -39,7 +43,7 @@ cd benchmark/ pip3 install -e ./ --use-pep517 ``` -### Step 2 Dataset Download +**Step 2 Dataset Download** The official documentation provides download links for each dataset. Taking CEVAL as an example, you can find the download link in the [CEVAL documentation](https://gitee.com/aisbench/benchmark/blob/master/ais_bench/benchmark/configs/datasets/ceval/README.md), and execute the following commands to download and extract the dataset to the specified path: @@ -55,15 +59,15 @@ rm ceval-exam.zip For other dataset downloads, you can find download links in the corresponding dataset's official documentation. -### Step 3 Start vLLM-MindSpore Service +**Step 3 Start vLLM-MindSpore Service** For the specific startup process, see the [Service Deployment Tutorial](./deployment.md). Evaluation supports all models that can be deployed as a service. 
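Before launching a full evaluation, it can help to smoke-test the service with a single request. A minimal sketch (the server address, port, and model name are placeholders for your deployment, not values from this tutorial):

```python
import json
import urllib.request

# Build an OpenAI-style /v1/completions request body. The model name and
# server address below are placeholders; substitute your deployment's values.
def build_completions_request(model: str, prompt: str, max_tokens: int = 32) -> dict:
    return {"model": model, "prompt": prompt, "max_tokens": max_tokens}

body = build_completions_request("Qwen3-30B-A3B", "Hello")
payload = json.dumps(body).encode("utf-8")

# Uncomment to send the request once the vLLM-MindSpore service is up:
# req = urllib.request.Request(
#     "http://127.0.0.1:8000/v1/completions",
#     data=payload,
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode("utf-8"))
```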
-## Accuracy Evaluation Process +### Accuracy Evaluation Process Accuracy evaluation first requires determining the evaluation interface and dataset type, which are selected based on the model's capabilities and the dataset. -### Step 1 Modify Interface Configuration +#### Step 1 Modify Interface Configuration AISBench supports OpenAI's v1/chat/completions and v1/completions interfaces, which correspond to different configuration files in AISBench. Taking the v1/completions interface (referred to as the general interface) as an example, you need to modify the following configuration in the file `ais_bench/benchmark/configs/models/vllm_api/vllm_api_general.py`: @@ -96,7 +100,7 @@ models = [ For more specific parameter descriptions, refer to [Interface Configuration Parameter Description](#appendix-interface-configuration-parameter-description-table). -### Step 2 Start Evaluation via Command Line +#### Step 2 Start Evaluation via Command Line Determine the dataset task to be used. Taking CEVAL as an example, using the ceval_gen_5_shot_str dataset task, the command is as follows: @@ -104,7 +108,7 @@ ais_bench --models vllm_api_general --datasets ceval_gen_5_shot_str --debug ``` -#### Parameter Description +Parameter Description: - `--models`: Specifies the model task interface, i.e., vllm_api_general, corresponding to the file name changed in the previous step (vllm_api_general_chat is also available for the chat interface) - `--datasets`: Specifies the dataset task, i.e., the ceval_gen_5_shot_str dataset task, where 5_shot means five examples are included in each prompt, and str means non-chat output @@ -113,11 +117,11 @@ For more parameter configuration descriptions, see [Configuration Description](h After the evaluation is completed, statistical results will be displayed on the screen. The specific execution results and logs will be saved in the outputs folder under the current path. 
In case of execution exceptions, problems can be located based on the logs. -## Performance Evaluation Process +### Performance Evaluation Process The performance evaluation process is similar to the accuracy evaluation process, but it pays more attention to the processing time of each stage of each request. By accurately recording the sending time of each request, the return time of each stage, and the response content, it systematically evaluates key performance indicators of the model service in actual deployment environments, such as response latency (e.g., TTFT, inter-token latency), throughput capacity (e.g., QPS, TPUT), and concurrent processing capabilities. The following uses the original GSM8K dataset for performance evaluation as an example. -### Step 1 Modify Interface Configuration +#### Step 1 Modify Interface Configuration By configuring service backend parameters, you can flexibly control the request content, request intervals, concurrency, and so on, to adapt to different evaluation scenarios (such as low-concurrency latency-sensitive or high-concurrency throughput-first scenarios). The configuration is similar to accuracy evaluation. Taking the vllm_api_stream_chat task as an example, modify the following configuration in `ais_bench/benchmark/configs/models/vllm_api/vllm_api_stream_chat.py`: @@ -151,13 +155,13 @@ models = [ For specific parameter descriptions, refer to [Interface Configuration Parameter Description](#appendix-interface-configuration-parameter-description-table) -### Step 2 Evaluation Command +#### Step 2 Evaluation Command ```bash ais_bench --models vllm_api_stream_chat --datasets gsm8k_gen_0_shot_cot_str_perf --summarizer default_perf --mode perf ``` -#### Parameter Description +Parameter Description: - `--models`: Specifies the model task interface, i.e., vllm_api_stream_chat, corresponding to the file name of the configuration changed in the previous step. 
- `--datasets`: Specifies the dataset task, i.e., the gsm8k_gen_0_shot_cot_str_perf dataset task, with a corresponding task file of the same name, where gsm8k refers to the dataset used, 0_shot means no examples are included in the prompt, cot means chain-of-thought prompting is used, str means non-chat output, and perf means performance testing @@ -166,29 +170,29 @@ ais_bench --models vllm_api_stream_chat --datasets gsm8k_gen_0_shot_cot_str_perf For more parameter configuration descriptions, see [Configuration Description](https://gitee.com/aisbench/benchmark/blob/master/doc/users_guide/models.md#%E6%9C%8D%E5%8A%A1%E5%8C%96%E6%8E%A8%E7%90%86%E5%90%8E%E7%AB%AF). -### Evaluation Results Description +#### Evaluation Results Description After the evaluation is completed, performance evaluation results will be output, including per-request performance results and end-to-end performance results. Parameter descriptions are as follows: -| Metric | Full Name | Description | -|-----------------------|-----------------------|-------------------------------------------| -| E2EL | End-to-End Latency | Total latency (ms) from request sending to receiving complete response | -| TTFT | Time To First Token | Latency (ms) for the first token to return | +| Metric | Full Name | Description | +|-----------------------|-----------------------|-------------------------------------------------------------------------------------------| +| E2EL | End-to-End Latency | Total latency (ms) from request sending to receiving complete response | +| TTFT | Time To First Token | Latency (ms) for the first token to return | | TPOT | Time Per Output Token | Average generation latency (ms) per token in the output phase (excluding the first token) | -| ITL | Inter-token Latency | Average interval latency (ms) between adjacent tokens (excluding the first token) | -| InputTokens | / | Number of input tokens in the request | -| OutputTokens | / | Number of output tokens generated by the request | -| OutputTokenThroughput | / | 
Throughput of output tokens (Token/s) | -| Tokenizer | / | Tokenizer encoding time (ms) | -| Detokenizer | / | Detokenizer decoding time (ms) | +| ITL | Inter-token Latency | Average interval latency (ms) between adjacent tokens (excluding the first token) | +| InputTokens | / | Number of input tokens in the request | +| OutputTokens | / | Number of output tokens generated by the request | +| OutputTokenThroughput | / | Throughput of output tokens (Token/s) | +| Tokenizer | / | Tokenizer encoding time (ms) | +| Detokenizer | / | Detokenizer decoding time (ms) | - For more evaluation tasks, such as synthetic random dataset evaluation and performance stress testing, see the following documentation: [AISBench Official Documentation](https://gitee.com/aisbench/benchmark/tree/master/doc/users_guide). - For more tips on optimizing inference performance, see the following documentation: [Inference Performance Optimization](https://docs.qq.com/doc/DZGhMSWFCenpQZWJR). - For more parameter descriptions, see the following documentation: [Performance Evaluation Results Description](https://gitee.com/aisbench/benchmark/blob/master/doc/users_guide/performance_metric.md). -## FAQ +### Appendix -### Q: Evaluation results output does not conform to format, how to make the results output conform to expectations? +**Q: Evaluation results output does not conform to format, how to make the results output conform to expectations?** In some datasets, we may want the model's output to conform to our expectations, so we can change the prompt. @@ -211,44 +215,44 @@ for _split in ['val']: For other datasets, similarly modify the template in the corresponding files to construct appropriate prompts. -### Q: How should interface types and inference lengths be configured for different datasets? +**Q: How should interface types and inference lengths be configured for different datasets?** This specifically depends on the comprehensive consideration of model type and dataset type. 
For reasoning models, the chat interface is recommended because it enables thinking, and the inference length should be set longer. For base models, the general interface is used.

- Taking the Qwen2.5 model evaluating the MMLU dataset as an example: from the dataset perspective, MMLU mainly tests knowledge, so the general interface is recommended. At the same time, when selecting dataset tasks, do not choose cot, i.e., do not enable the chain of thought.
- Taking the QwQ-32B model evaluating difficult mathematical reasoning questions such as AIME2025 as an example: use the chat interface with an ultra-long inference length and use dataset tasks with cot.

-### Common Errors
+#### Common Errors

-#### 1.Client returns HTML data with garbled characters
+1. Client returns HTML data with garbled characters

-**Error phenomenon**: Return webpage HTML data
-**Solution**: Check if the client has a proxy enabled, check proxy_https and proxy_http and turn off the proxy.
+   **Error phenomenon**: The response returns webpage HTML data.
+   **Solution**: Check whether the client has a proxy enabled; check proxy_https and proxy_http and turn the proxy off.

-#### 2.Server reports 400 Bad Request
+2. Server reports 400 Bad Request

-**Error phenomenon**:
+   **Error phenomenon**:

-```plaintext
-INFO: 127.0.0.1:53456 - "POST /v1/completions HTTP/1.1" 400 Bad Request
-INFO: 127.0.0.1:53470 - "POST /v1/completions HTTP/1.1" 400 Bad Request
-```
+   ```plaintext
+   INFO: 127.0.0.1:53456 - "POST /v1/completions HTTP/1.1" 400 Bad Request
+   INFO: 127.0.0.1:53470 - "POST /v1/completions HTTP/1.1" 400 Bad Request
+   ```

-**Solution**: Check if the request format is correct in the client interface configuration.
+   **Solution**: Check whether the request format in the client interface configuration is correct.

-#### 3.Server reports error 404 xxx does not exist
+3.
Server reports error 404 xxx does not exist -**Error phenomenon**: + **Error phenomenon**: -```plaintext -[serving_chat.py:135] Error with model object='error' message='The model 'Qwen3-30B-A3B-Instruct-2507' does not exist.' param=None code=404 -"POST /v1/chat/completions HTTP/1.1" 404 Not Found -[serving_chat.py:135] Error with model object='error' message='The model 'Qwen3-30B-A3B-Instruct-2507' does not exist.' -``` + ```plaintext + [serving_chat.py:135] Error with model object='error' message='The model 'Qwen3-30B-A3B-Instruct-2507' does not exist.' param=None code=404 + "POST /v1/chat/completions HTTP/1.1" 404 Not Found + [serving_chat.py:135] Error with model object='error' message='The model 'Qwen3-30B-A3B-Instruct-2507' does not exist.' + ``` -**Solution**: Check if the model path in the interface configuration is accessible. + **Solution**: Check if the model path in the interface configuration is accessible. -## Appendix: Interface Configuration Parameter Description Table +### Interface Configuration Parameter Description Table | Parameter | Description | |---------------------|----------------------------------------------------------------------| @@ -268,9 +272,272 @@ INFO: 127.0.0.1:53470 - "POST /v1/completions HTTP/1.1" 400 Bad Request | repetition_penalty | Post-processing parameter, repetition penalty | | ignore_eos | Inference service output ignores eos (output length will definitely reach max_out_len) | -## References +### References The above only introduces the basic usage of AISBench. 
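For reference, the latency metrics defined in the results table above can be sanity-checked with a few lines of code. The sketch below is illustrative only and is not part of AISBench; the function and variable names are made up for this example:

```python
# Illustrative sketch (not an AISBench API): deriving E2EL, TTFT, TPOT and
# ITL for a single streaming request from per-token arrival timestamps.

def latency_metrics(request_sent_s, token_arrival_s):
    """Return the latency metrics, in milliseconds, for one request."""
    decode_tokens = len(token_arrival_s) - 1  # tokens after the first one
    gaps_ms = [(b - a) * 1000.0 for a, b in zip(token_arrival_s, token_arrival_s[1:])]
    return {
        "E2EL": (token_arrival_s[-1] - request_sent_s) * 1000.0,  # end-to-end latency
        "TTFT": (token_arrival_s[0] - request_sent_s) * 1000.0,   # time to first token
        "TPOT": (token_arrival_s[-1] - token_arrival_s[0]) * 1000.0 / decode_tokens,
        "ITL": sum(gaps_ms) / decode_tokens,                      # mean inter-token gap
    }

# Example: 5 tokens, the first arriving 80 ms after the request was sent,
# then one token every 20 ms, so TTFT is 80 ms and TPOT/ITL are 20 ms.
metrics = latency_metrics(0.0, [0.08, 0.10, 0.12, 0.14, 0.16])
print({k: round(v, 1) for k, v in metrics.items()})
```

With uniform token gaps, TPOT and the mean ITL coincide; in real runs the gaps vary, which is why the tool reports them separately.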
For more tutorials and usage methods, please refer to the official materials:

- [AISBench Official Tutorial](https://gitee.com/aisbench/benchmark)
-- [AISBench Main Documentation](https://gitee.com/aisbench/benchmark/tree/master/doc/users_guide)
\ No newline at end of file
+- [AISBench Main Documentation](https://gitee.com/aisbench/benchmark/tree/master/doc/users_guide)
+
+## Harness Evaluation
+
+[LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) is an open-source language model evaluation framework. It provides evaluation on more than 60 standard academic datasets; supports multiple evaluation modes such as HuggingFace model evaluation, PEFT adapter evaluation, and vLLM inference evaluation; and supports customized prompts and evaluation metrics, including evaluation tasks of the loglikelihood, generate_until, and loglikelihood_rolling types. After MindSpore Transformers is adapted to the Harness evaluation framework, MindSpore Transformers models can be loaded for evaluation.
+
+The currently verified models and supported evaluation tasks are shown in the table below (the remaining models and evaluation tasks are being actively verified and adapted; please pay attention to version updates):
+
+| Verified models | Supported evaluation tasks                     |
+|-----------------|------------------------------------------------|
+| Llama3          | gsm8k, ceval-valid, mmlu, cmmlu, race, lambada |
+| Llama3.1        | gsm8k, ceval-valid, mmlu, cmmlu, race, lambada |
+| Qwen2           | gsm8k, ceval-valid, mmlu, cmmlu, race, lambada |
+
+### Installation
+
+Harness supports two installation methods: pip installation and source-code compilation. Pip installation is simpler and faster, while source-code installation is easier to debug and analyze; users can choose the appropriate method according to their needs.
+
+#### pip Installation
+
+Users can execute the following command to install Harness (version 0.4.4 is recommended):
+
+```shell
+pip install lm_eval==0.4.4
+```
+
+#### Source Code Compilation Installation
+
+Users can execute the following commands to compile and install Harness:
+
+```bash
+git clone --depth 1 -b v0.4.4 https://github.com/EleutherAI/lm-evaluation-harness
+cd lm-evaluation-harness
+pip install -e .
+```
+
+### Usage
+
+#### Preparations Before Evaluation
+
+1. Create a new directory, for example named `model_dir`, for storing the model yaml files.
+2. Place the model inference yaml configuration file (predict_xxx_.yaml) in the directory created in the previous step. For the directory locations of the inference yaml configuration files of different models, refer to the [model library](../introduction/models.md).
+3. Configure the yaml file. If the model class, model Config class, or model Tokenizer class in the yaml uses plug-in code, that is, the code files are in the [research](https://gitee.com/mindspore/mindformers/tree/master/research) directory or another external directory, you need to modify the yaml file: under the `type` field of the corresponding class, add the `auto_register` field in the format `module.class` (`module` is the file name of the script where the class is located, and `class` is the class name; if the field already exists, no modification is needed).
+
+   Using the [predict_llama3_1_8b.yaml](https://gitee.com/mindspore/mindformers/blob/master/research/llama3_1/llama3_1_8b/predict_llama3_1_8b.yaml) configuration as an example, modify some of the configuration items as follows:
+
+   ```yaml
+   run_mode: 'predict'           # Set inference mode
+   load_checkpoint: 'model.ckpt' # path of ckpt
+   processor:
+     tokenizer:
+       vocab_file: "tokenizer.model" # path of tokenizer
+       type: Llama3Tokenizer
+       auto_register: llama3_tokenizer.Llama3Tokenizer
+   ```
+
+   For detailed instructions on each configuration item, please refer to the [configuration description](../feature/configuration.md).
+4. If you use the `ceval-valid`, `mmlu`, `cmmlu`, `race`, or `lambada` datasets for evaluation, you need to set `use_flash_attention` to `False`. Using `predict_llama3_1_8b.yaml` as an example, modify the yaml as follows:
+
+   ```yaml
+   model:
+     model_config:
+       # ...
+       use_flash_attention: False # Set to False
+       # ...
+   ```
+
+#### Evaluation Example
+
+Execute the script [run_harness.sh](https://gitee.com/mindspore/mindformers/blob/master/toolkit/benchmarks/run_harness.sh) to evaluate.
+
+The following table lists the parameters of the `run_harness.sh` script:
+
+| Parameter         | Type | Description | Required |
+|-------------------|------|-------------|----------|
+| `--register_path` | str  | The absolute path of the directory where the plug-in code is located, for example a model directory under the [research](https://gitee.com/mindspore/mindformers/tree/master/research) directory. | No (required for plug-in code) |
+| `--model`         | str  | The value must be `mf`, indicating the MindSpore Transformers evaluation policy. | Yes |
+| `--model_args`    | str  | Model and evaluation parameters. For details, see the MindSpore Transformers model parameters. | Yes |
+| `--tasks`         | str  | Dataset name. Multiple datasets can be specified, separated by commas (,). | Yes |
+| `--batch_size`    | int  | Number of batch processing samples. | No |
+
+The following table lists the parameters of `model_args`:
+
+| Parameter      | Type | Description | Required |
+|----------------|------|-------------|----------|
+| `pretrained`   | str  | Model directory. | Yes |
+| `max_length`   | int  | Maximum length of model generation. | No |
+| `use_parallel` | bool | Enable the parallel strategy (must be enabled for multi-card evaluation). | No |
+| `tp`           | int  | Tensor parallel size. | No |
+| `dp`           | int  | Data parallel size. | No |
+
+Harness evaluation supports single-device single-card, single-device multi-card, and multi-device multi-card scenarios; sample evaluations for each scenario are listed below:
+
+1. Single Card Evaluation Example
+
+   ```shell
+   source toolkit/benchmarks/run_harness.sh \
+    --register_path mindformers/research/llama3_1 \
+    --model mf \
+    --model_args pretrained=model_dir \
+    --tasks gsm8k
+   ```
+
+2. Multi Card Evaluation Example
+
+   ```shell
+   source toolkit/benchmarks/run_harness.sh \
+    --register_path mindformers/research/llama3_1 \
+    --model mf \
+    --model_args pretrained=model_dir,use_parallel=True,tp=4,dp=1 \
+    --tasks ceval-valid \
+    --batch_size BATCH_SIZE WORKER_NUM
+   ```
+
+   - `BATCH_SIZE` is the number of samples for model batch processing;
+   - `WORKER_NUM` is the number of compute devices used.
+
+3.
Multi-Device and Multi-Card Example + + Node 0 (Master) Command: + + ```shell + source toolkit/benchmarks/run_harness.sh \ + --register_path mindformers/research/llama3_1 \ + --model mf \ + --model_args pretrained=model_dir,use_parallel=True,tp=8,dp=1 \ + --tasks lambada \ + --batch_size 2 8 4 192.168.0.0 8118 0 output/msrun_log False 300 + ``` + + Node 1 (Secondary Node) Command: + + ```shell + source toolkit/benchmarks/run_harness.sh \ + --register_path mindformers/research/llama3_1 \ + --model mf \ + --model_args pretrained=model_dir,use_parallel=True,tp=8,dp=1 \ + --tasks lambada \ + --batch_size 2 8 4 192.168.0.0 8118 1 output/msrun_log False 300 + ``` + + Node n (Nth Node) Command: + + ```shell + source toolkit/benchmarks/run_harness.sh \ + --register_path mindformers/research/llama3_1 \ + --model mf \ + --model_args pretrained=model_dir,use_parallel=True,tp=8,dp=1 \ + --tasks lambada \ + --batch_size BATCH_SIZE WORKER_NUM LOCAL_WORKER MASTER_ADDR MASTER_PORT NODE_RANK output/msrun_log False CLUSTER_TIME_OUT + ``` + + - `BATCH_SIZE` is the sample size for batch processing of models; + - `WORKER_NUM` is the total number of compute devices used on all nodes; + - `LOCAL_WORKER` is the number of compute devices used on the current node; + - `MASTER_ADDR` is the IP address of the primary node to be started in distributed mode; + - `MASTER_PORT` is the Port number bound for distributed startup; + - `NODE_RANK` is the Rank ID of the current node; + - `CLUSTER_TIME_OUT` is the waiting time for distributed startup, in seconds. + + To execute the multi-node multi-device script for evaluating, you need to run the script on different nodes and set MASTER_ADDR to the IP address of the primary node. The IP address should be the same across all nodes, and only the NODE_RANK parameter varies across nodes. + +### Viewing the Evaluation Results + +After executing the evaluation command, the evaluation results will be printed out on the terminal. 
Taking gsm8k as an example, the evaluation results are as follows, where Filter corresponds to the way the model's output is matched, n-shot corresponds to the content format of the dataset, Metric corresponds to the evaluation metric, Value corresponds to the evaluation score, and Stderr corresponds to the score error.
+
+| Tasks | Version | Filter           | n-shot | Metric      |   | Value  |   | Stderr |
+|-------|--------:|------------------|-------:|-------------|---|--------|---|--------|
+| gsm8k |       3 | flexible-extract |      5 | exact_match | ↑ | 0.5034 | ± | 0.0138 |
+|       |         | strict-match     |      5 | exact_match | ↑ | 0.5011 | ± | 0.0138 |
+
+### FAQ
+
+1. When using Harness for evaluation and loading HuggingFace datasets, an `SSLError` is reported:
+
+   Refer to the [SSL Error solution](https://stackoverflow.com/questions/71692354/facing-ssl-error-with-huggingface-pretrained-models).
+
+   Note: Turning off SSL verification is risky and may expose you to man-in-the-middle (MITM) attacks. It is only recommended in test environments or over connections you fully trust.
+
+## Evaluation After Training
+
+After training, the trained model weights are generally used to run evaluation tasks to verify the training effect. This chapter introduces the necessary steps from training to evaluation, including:
+
+1. Processing the distributed weights produced by training (single-card training can skip this step);
+2. Writing the inference configuration file used for evaluation, based on the training configuration;
+3. Running a simple inference task to verify the correctness of the above steps;
+4. Performing the evaluation task.
+
+### Distributed Weight Merging
+
+If the weights generated after training are distributed, the existing distributed weights need to be merged into complete weights first; the weights can then be loaded through online slicing to complete the inference task.
Using the [safetensors weight merging script](https://gitee.com/mindspore/mindformers/blob/master/toolkit/safetensors/unified_safetensors.py) provided by MindSpore Transformers, the merged weights are saved as complete weights.
+
+The parameters can be filled in as follows:
+
+```shell
+python toolkit/safetensors/unified_safetensors.py \
+  --src_strategy_dirs src_strategy_path_or_dir \
+  --mindspore_ckpt_dir mindspore_ckpt_dir \
+  --output_dir output_dir \
+  --file_suffix "1_1" \
+  --filter_out_param_prefix "adam_"
+```
+
+Script parameter description:
+
+- src_strategy_dirs: The path of the distributed strategy file corresponding to the source weights, usually saved by default in the output/strategy/ directory after a training task is started. For distributed weights, fill it in according to the following situations:
+
+  1. The source weights enable pipeline parallelism: weight conversion is based on the merged strategy file, so fill in the path of the distributed strategy folder. The script automatically merges all ckpt_strategy_rank_x.ckpt files in the folder and generates merged_ckpt_strategy.ckpt in it. If merged_ckpt_strategy.ckpt already exists, you can directly fill in the path of that file.
+  2. The source weights do not enable pipeline parallelism: weight conversion can be based on any strategy file, so fill in the path of any ckpt_strategy_rank_x.ckpt file.
+
+  Note: If merged_ckpt_strategy.ckpt already exists in the strategy folder and the folder path is still passed in, the script first deletes the old merged_ckpt_strategy.ckpt and then merges a new one for weight conversion. Therefore, make sure the folder has sufficient write permissions; otherwise, the operation will report an error.
+
+- mindspore_ckpt_dir: The path of the distributed weights. Fill in the path of the folder where the source weights are located. The source weights should be stored in the format model_dir/rank_x/xxx.safetensors, and the folder path to fill in is model_dir.
+- output_dir: The save path of the target weights. The default value is `/new_llm_data/******/ckpt/nbg3_31b/tmp`, that is, the target weights are placed in the `/new_llm_data/******/ckpt/nbg3_31b/tmp` directory.
+- file_suffix: The naming suffix of the target weight files. The default value is "1_1", that is, the target weights are searched for in the format *1_1.safetensors.
+- has_redundancy: Whether the source weights to be merged are redundant weights. The default is True.
+- filter_out_param_prefix: Parameters to filter out when merging weights; the filtering rule matches by prefix name, for example the optimizer parameters "adam_".
+- max_process_num: The maximum number of processes used for merging. Default value: 64.
+
+### Inference Configuration Development
+
+After the weight files are merged, you need to develop the corresponding inference configuration file based on the training configuration file.
+
+Taking Qwen3 as an example, modify the [Qwen3 training configuration](https://gitee.com/mindspore/mindformers/blob/master/configs/qwen3/finetune_qwen3.yaml) based on the [Qwen3 inference configuration](https://gitee.com/mindspore/mindformers/blob/master/configs/qwen3/predict_qwen3.yaml):
+
+The main modification points of the Qwen3 training configuration include:
+
+- Change the value of run_mode to "predict".
+- Add pretrained_model_dir: the Hugging Face or ModelScope model directory path, which holds the model configuration, Tokenizer, and other files.
+- In parallel_config, keep only data_parallel and model_parallel.
+- In model_config, keep only compute_dtype, layernorm_compute_dtype, softmax_compute_dtype, rotary_dtype, and params_dtype, and keep the precision consistent with the inference configuration.
+- In the parallel module, keep only parallel_mode and enable_alltoall, and change the value of parallel_mode to "MANUAL_PARALLEL".
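The modification points above can also be expressed programmatically. The following is a minimal sketch (a hypothetical helper, not a MindSpore Transformers API) that derives an inference configuration dict from a training configuration dict, mirroring the yaml fields discussed above:

```python
# Sketch of the inference-config modification points as a dict transformation
# (hypothetical helper, not a MindSpore Transformers API).

KEEP_DTYPE_KEYS = {"compute_dtype", "layernorm_compute_dtype",
                   "softmax_compute_dtype", "rotary_dtype", "params_dtype"}

def to_predict_config(train_cfg, pretrained_model_dir):
    cfg = dict(train_cfg)
    cfg["run_mode"] = "predict"                         # switch to inference mode
    cfg["pretrained_model_dir"] = pretrained_model_dir  # HF/ModelScope model dir
    # parallel_config: keep only data_parallel and model_parallel
    pc = train_cfg.get("parallel_config", {})
    cfg["parallel_config"] = {k: pc[k] for k in ("data_parallel", "model_parallel") if k in pc}
    # model_config: keep only the dtype fields, consistent with the inference config
    mc = train_cfg.get("model", {}).get("model_config", {})
    cfg["model"] = {"model_config": {k: v for k, v in mc.items() if k in KEEP_DTYPE_KEYS}}
    # parallel: keep only parallel_mode and enable_alltoall, forcing MANUAL_PARALLEL
    pl = train_cfg.get("parallel", {})
    cfg["parallel"] = {"parallel_mode": "MANUAL_PARALLEL",
                       "enable_alltoall": pl.get("enable_alltoall", False)}
    return cfg
```

Such a helper only illustrates the intent of each rule; in practice the predict yaml shipped with the model is the authoritative starting point.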
+ +### Inference Function Verification + +After the weights and configuration files are ready, use a single data input for inference to check whether the output content meets the expected logic. Refer to the [inference document](https://gitee.com/mindspore/docs/blob/master/docs/mindformers/docs/source_en/guide/inference.md) to start the inference task. + +For example: + +```shell +python run_mindformer.py \ +--config configs/qwen3/predict_qwen3.yaml \ +--run_mode predict \ +--use_parallel False \ +--predict_data '帮助我制定一份去上海的旅游攻略' +``` + +If the output content appears garbled or does not meet expectations, you need to locate the precision problem. + +1. Check the correctness of the model configuration + + Confirm that the model structure is consistent with the training configuration. Refer to the training configuration template usage tutorial to ensure that the configuration file complies with specifications and avoid inference exceptions caused by parameter errors. + +2. Verify the completeness of weight loading + + Check whether the model weight files are loaded completely, and ensure that the weight names strictly match the model structure. Refer to the new model weight conversion adaptation tutorial to view the weight log, that is, whether the weight slicing method is correct, to avoid inference errors caused by mismatched weights. + +3. Locate inference precision issues + + If the model configuration and weight loading are both correct, but the inference results still do not meet expectations, precision comparison analysis is required. Refer to the inference precision comparison document to compare the output differences between training and inference layer by layer, and troubleshoot potential data preprocessing, computational precision, or operator issues. + +### Evaluation using AISBench + +Refer to the AISBench evaluation section and use the AISBench tool for evaluation to verify model precision. 
\ No newline at end of file diff --git a/docs/mindformers/docs/source_en/index.rst b/docs/mindformers/docs/source_en/index.rst index e95abdb95f..3c6ad51cbe 100644 --- a/docs/mindformers/docs/source_en/index.rst +++ b/docs/mindformers/docs/source_en/index.rst @@ -26,7 +26,7 @@ MindSpore Transformers supports one-click start of single/multi-card training, f - `Supervised Fine-Tuning `_ - `Inference `_ - `Service Deployment `_ -- `Benchmark `_ +- `Evaluation `_ Code repository address: @@ -101,10 +101,6 @@ MindSpore Transformers provides a wealth of features throughout the full-process - Inference Features: - - `Evaluation `_ - - Supports the use of third-party open-source evaluation frameworks and datasets for large-scale model ranking evaluations. - - `Quantization `_ Integrates MindSpore Golden Stick toolkit and provides a unified quantization inference process. @@ -171,7 +167,7 @@ FAQ guide/supervised_fine_tuning guide/inference guide/deployment - guide/benchmarks + guide/evaluation .. 
toctree:: :glob: diff --git a/docs/mindformers/docs/source_zh_cn/advanced_development/inference_precision_comparison.md b/docs/mindformers/docs/source_zh_cn/advanced_development/inference_precision_comparison.md index 4f1192a20f..897ef7aa49 100644 --- a/docs/mindformers/docs/source_zh_cn/advanced_development/inference_precision_comparison.md +++ b/docs/mindformers/docs/source_zh_cn/advanced_development/inference_precision_comparison.md @@ -24,7 +24,7 @@ ### 数据集评测 通过在线推理验证之后,模型在保持输入相同的情况下,标杆的输出可以基本保持一致。但是数据量比较小,并且问题涉及领域不够全面,需要通过数据集评测来最终验证模型的精度。只有数据集的评测得分和标杆数据能够满足0.4%的误差,才能证明模型的精度符合验收标准。 -关于模型如何用数据集评测可以参考[评测指南](https://www.mindspore.cn/mindformers/docs/zh-CN/master/guide/benchmarks.html)。 +关于模型如何用数据集评测可以参考[评测指南](https://www.mindspore.cn/mindformers/docs/zh-CN/master/guide/evaluation.html)。 ## 定位精度问题 diff --git a/docs/mindformers/docs/source_zh_cn/feature/evaluation.md b/docs/mindformers/docs/source_zh_cn/feature/evaluation.md deleted file mode 100644 index 1ff628fce8..0000000000 --- a/docs/mindformers/docs/source_zh_cn/feature/evaluation.md +++ /dev/null @@ -1,272 +0,0 @@ -# 评测 - -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/mindformers/docs/source_zh_cn/feature/evaluation.md) - -## Harness评测 - -### 基本介绍 - -[LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness)是一个开源语言模型评测框架。该框架提供60多种标准学术数据集的评测,支持HuggingFace模型评测、PEFT适配器评测、vLLM推理评测等多种评测方式。同时支持自定义prompt和评测指标,包含loglikelihood、generate_until、loglikelihood_rolling三种类型的评测任务。基于Harness评测框架对MindSpore Transformers进行适配后,支持加载MindSpore Transformers模型进行评测。 - -目前已验证过的模型和支持的评测任务如下表所示(其余模型和评测任务正在积极验证和适配中,请关注版本更新): - -| 已验证的模型 | 支持的评测任务 | -|----------|------------------------| -| Llama3 | gsm8k、ceval-valid、mmlu、cmmlu、race、lambada | -| Llama3.1 | gsm8k、ceval-valid、mmlu、cmmlu、race、lambada | -| Qwen2 | gsm8k、ceval-valid、mmlu、cmmlu、race、lambada | - -### 安装 - 
-Harness支持pip安装和源码编译安装两种方式。pip安装更简单快捷,源码编译安装更便于调试分析。用户可以根据需要选择合适的安装方式。 - -#### pip安装 - -用户可以执行如下命令安装Harness(推荐使用0.4.4版本): - -```shell -pip install lm_eval==0.4.4 -``` - -#### 源码编译安装 - -用户可以执行如下命令编译并安装Harness: - -```bash -git clone --depth 1 -b v0.4.4 https://github.com/EleutherAI/lm-evaluation-harness -cd lm-evaluation-harness -pip install -e . -``` - -### 使用方式 - -#### 评测前准备 - - 1. 创建一个新目录,例如名称为`model_dir`,用于存储模型yaml文件。 - 2. 在上个步骤创建的目录中,放置模型推理yaml配置文件(predict_xxx_.yaml)。不同模型的推理yaml配置文件所在目录位置,请参考[模型库](../introduction/models.md)。 - 3. 配置yaml文件。如果yaml中模型类、模型Config类、模型Tokenizer类使用了外挂代码,即代码文件在[research](https://gitee.com/mindspore/mindformers/tree/master/research)目录或其他外部目录下,需要修改yaml文件:在相应类的`type`字段下,添加`auto_register`字段,格式为“module.class”(其中“module”为类所在脚本的文件名,“class”为类名。如果已存在,则不需要修改)。 - - 以[predict_llama3_1_8b.yaml](https://gitee.com/mindspore/mindformers/blob/master/research/llama3_1/llama3_1_8b/predict_llama3_1_8b.yaml)配置为例,对其中的部分配置项进行如下修改: - - ```yaml - run_mode: 'predict' # 设置推理模式 - load_checkpoint: 'model.ckpt' # 权重路径 - processor: - tokenizer: - vocab_file: "tokenizer.model" # tokenizer路径 - type: Llama3Tokenizer - auto_register: llama3_tokenizer.Llama3Tokenizer - ``` - - 关于每个配置项的详细说明请参考[配置文件说明](../feature/configuration.md)。 - 4. 如果使用`ceval-valid`、`mmlu`、`cmmlu`、`race`、`lambada`数据集进行评测,需要将`use_flash_attention`设置为`False`。以`predict_llama3_1_8b.yaml`为例,修改yaml如下: - - ```yaml - model: - model_config: - # ... - use_flash_attention: False # 设置为False - # ... 
- ``` - -#### 评测样例 - -执行脚本[run_harness.sh](https://gitee.com/mindspore/mindformers/blob/master/toolkit/benchmarks/run_harness.sh)进行评测。 - -run_harness.sh脚本参数配置如下表: - -| 参数 | 类型 | 参数介绍 | 是否必须 | -|------------------|-----|------------------------------------------------------------------------------------------------|------| -| `--register_path`| str | 外挂代码所在目录的绝对路径。比如[research](https://gitee.com/mindspore/mindformers/tree/master/research)目录下的模型目录 | 否(外挂代码必填) | -| `--model` | str | 需设置为 `mf` ,对应为MindSpore Transformers评估策略 | 是 | -| `--model_args` | str | 模型及评估相关参数,见下方模型参数介绍 | 是 | -| `--tasks` | str | 数据集名称。可传入多个数据集,使用逗号(,)分隔 | 是 | -| `--batch_size` | int | 批处理样本数 | 否 | - -其中,model_args参数配置如下表: - -| 参数 | 类型 | 参数介绍 | 是否必须 | -|----------------|---------|--------------------|------| -| `pretrained` | str | 模型目录路径 | 是 | -| `max_length` | int | 模型生成的最大长度 | 否 | -| `use_parallel` | bool | 开启并行策略(执行多卡评测必须开启) | 否 | -| `tp` | int | 张量并行数 | 否 | -| `dp` | int | 数据并行数 | 否 | - -Harness评测支持单机单卡、单机多卡、多机多卡场景,每种场景的评测样例如下: - -1. 单卡评测样例 - - ```shell - source toolkit/benchmarks/run_harness.sh \ - --register_path mindformers/research/llama3_1 \ - --model mf \ - --model_args pretrained=model_dir \ - --tasks gsm8k - ``` - -2. 多卡评测样例 - - ```shell - source toolkit/benchmarks/run_harness.sh \ - --register_path mindformers/research/llama3_1 \ - --model mf \ - --model_args pretrained=model_dir,use_parallel=True,tp=4,dp=1 \ - --tasks ceval-valid \ - --batch_size BATCH_SIZE WORKER_NUM - ``` - - - `BATCH_SIZE`为模型批处理样本数; - - `WORKER_NUM`为使用计算卡的总数。 - -3. 
多机多卡评测样例 - - 节点0(主节点)命令: - - ```shell - source toolkit/benchmarks/run_harness.sh \ - --register_path mindformers/research/llama3_1 \ - --model mf \ - --model_args pretrained=model_dir,use_parallel=True,tp=8,dp=1 \ - --tasks lambada \ - --batch_size 2 8 4 192.168.0.0 8118 0 output/msrun_log False 300 - ``` - - 节点1(副节点)命令: - - ```shell - source toolkit/benchmarks/run_harness.sh \ - --register_path mindformers/research/llama3_1 \ - --model mf \ - --model_args pretrained=model_dir,use_parallel=True,tp=8,dp=1 \ - --tasks lambada \ - --batch_size 2 8 4 192.168.0.0 8118 1 output/msrun_log False 300 - ``` - - 节点n(副节点)命令: - - ```shell - source toolkit/benchmarks/run_harness.sh \ - --register_path mindformers/research/llama3_1 \ - --model mf \ - --model_args pretrained=model_dir,use_parallel=True,tp=8,dp=1 \ - --tasks lambada \ - --batch_size BATCH_SIZE WORKER_NUM LOCAL_WORKER MASTER_ADDR MASTER_PORT NODE_RANK output/msrun_log False CLUSTER_TIME_OUT - ``` - - - `BATCH_SIZE`为模型批处理样本数; - - `WORKER_NUM`为所有节点中使用计算卡的总数; - - `LOCAL_WORKER`为当前节点中使用计算卡的数量; - - `MASTER_ADDR`为分布式启动主节点的ip; - - `MASTER_PORT`为分布式启动绑定的端口号; - - `NODE_RANK`为当前节点的rank id; - - `CLUSTER_TIME_OUT`为分布式启动的等待时间,单位为秒。 - - 多机多卡评测需要分别在不同节点运行脚本,并将参数MASTER_ADDR设置为主节点的ip地址, 所有节点设置的ip地址相同,不同节点之间仅参数NODE_RANK不同。 - -### 查看评测结果 - -执行评测命令后,评测结果将会在终端打印出来。以gsm8k为例,评测结果如下,其中Filter对应匹配模型输出结果的方式,n-shot对应数据集内容格式,Metric对应评测指标,Value对应评测分数,Stderr对应分数误差。 - -| Tasks | Version | Filter | n-shot | Metric | | Value | | Stderr | -|-------|--------:|------------------|-------:|-------------|---|--------|---|--------| -| gsm8k | 3 | flexible-extract | 5 | exact_match | ↑ | 0.5034 | ± | 0.0138 | -| | | strict-match | 5 | exact_match | ↑ | 0.5011 | ± | 0.0138 | - -### FAQ - -1. 
使用Harness进行评测,在加载HuggingFace数据集时,报错`SSLError`: - - 参考[SSL Error报错解决方案](https://stackoverflow.com/questions/71692354/facing-ssl-error-with-huggingface-pretrained-models)。 - - 注意:关闭SSL校验存在风险,可能暴露在中间人攻击(MITM)下。仅建议在测试环境或你完全信任的连接里使用。 - -## 训练后模型进行评测 - -### 概述 - -模型在训练过程中或训练结束后,一般会将训练得到的模型权重去跑评测任务,来验证模型的训练效果。本章节介绍了从训练后到评测前的必要步骤,包括: - -1. 训练后的分布式权重的处理(单卡训练可忽略此步骤); -2. 基于训练配置编写评测使用的推理配置文件; -3. 运行简单的推理任务验证上述步骤的正确性; -4. 进行评测任务。 - -用户可以参考本文档来对自己训练的模型进行评测。 - -### 分布式权重合并 - -训练后产生的权重如果是分布式的,需要先将已有的分布式权重合并成完整权重后,再通过在线切分的方式进行权重加载完成推理任务。使用MindSpore Transformers提供的[safetensors权重合并脚本](https://gitee.com/mindspore/mindformers/blob/master/toolkit/safetensors/unified_safetensors.py),合并后的权重格式为完整权重。 - -可以按照以下方式填写参数: - -```shell -python toolkit/safetensors/unified_safetensors.py \ - --src_strategy_dirs src_strategy_path_or_dir \ - --mindspore_ckpt_dir mindspore_ckpt_dir\ - --output_dir output_dir \ - --file_suffix "1_1" \ - --filter_out_param_prefix "adam_" -``` - -脚本参数说明: - -- src_strategy_dirs:源权重对应的分布式策略文件路径,通常在启动训练任务后默认保存在 output/strategy/ 目录下。分布式权重需根据以下情况填写: - - 1. 源权重开启了流水线并行:权重转换基于合并的策略文件,填写分布式策略文件夹路径。脚本会自动将文件夹内的所有 ckpt_strategy_rank_x.ckpt 文件合并,并在文件夹下生成 merged_ckpt_strategy.ckpt。如果已经存在 merged_ckpt_strategy.ckpt,可以直接填写该文件的路径。 - 2. 
源权重未开启流水线并行:权重转换可基于任一策略文件,填写任意一个 ckpt_strategy_rank_x.ckpt 文件的路径即可。 - - 注意:如果策略文件夹下已存在 merged_ckpt_strategy.ckpt 且仍传入文件夹路径,脚本会首先删除旧的 merged_ckpt_strategy.ckpt,再合并生成新的 merged_ckpt_strategy.ckpt 以用于权重转换。因此,请确保该文件夹具有足够的写入权限,否则操作将报错。 - -- mindspore_ckpt_dir:分布式权重路径,请填写源权重所在文件夹的路径,源权重应按 model_dir/rank_x/xxx.safetensors 格式存放,并将文件夹路径填写为 model_dir。 -- output_dir:目标权重的保存路径,默认值为 `/new_llm_data/******/ckpt/nbg3_31b/tmp`,即目标权重将放置在 `/new_llm_data/******/ckpt/nbg3_31b/tmp` 目录下。 -- file_suffix:目标权重文件的命名后缀,默认值为 "1_1",即目标权重将按照 *1_1.safetensors 格式查找。 -- has_redundancy:合并的源权重是否是冗余的权重,默认为 True。 -- filter_out_param_prefix:合并权重时可自定义过滤掉部分参数,过滤规则以前缀名匹配。如优化器参数"adam_"。 -- max_process_num:合并最大进程数。默认值:64。 - -### 推理配置开发 - -在完成权重文件的合并后,需依据训练配置文件开发对应的推理配置文件。 - -以Qwen3为例,基于[Qwen3推理配置](https://gitee.com/mindspore/mindformers/blob/master/configs/qwen3/predict_qwen3.yaml)修改[Qwen3训练配置](https://gitee.com/mindspore/mindformers/blob/master/configs/qwen3/finetune_qwen3.yaml): - -Qwen3训练配置主要修改点包括: - -- run_mode的值修改为"predict"。 -- 添加pretrained_model_dir:Hugging Face或ModelScope的模型目录路径,放置模型配置、Tokenizer等文件。 -- parallel_config只保留data_parallel和model_parallel。 -- model_config中只保留compute_dtype、layernorm_compute_dtype、softmax_compute_dtype、rotary_dtype、params_dtype,和推理配置保持精度一致。 -- parallel模块中,只保留parallel_mode和enable_alltoall,parallel_mode的值修改为"MANUAL_PARALLEL"。 - -### 推理功能验证 - -在权重和配置文件都准备好的情况下,使用单条数据输入进行推理,检查输出内容是否符合预期逻辑,参考[推理文档](https://gitee.com/mindspore/docs/blob/master/docs/mindformers/docs/source_zh_cn/guide/inference.md),拉起推理任务。 - -例如: - -```shell -python run_mindformer.py \ ---config configs/qwen3/predict_qwen3.yaml \ ---run_mode predict \ ---use_parallel False \ ---predict_data '帮助我制定一份去上海的旅游攻略' -``` - -如果输出内容出现乱码或者不符合预期,需要定位精度问题。 - -1. 检查模型配置正确性 - - 确认模型结构与训练配置一致。参考训练配置模板使用教程,确保配置文件符合规范,避免因参数错误导致推理异常。 - -2. 验证权重加载完整性 - - 检查模型权重文件是否完整加载,确保权重名称与模型结构严格匹配。参考新模型权重转换适配教程,查看权重日志即权重切分方式是否正确,避免因权重不匹配导致推理错误。 - -3. 
定位推理精度问题
-
-   若模型配置与权重加载均无误,但推理结果仍不符合预期,需进行精度比对分析,参考推理精度比对文档,逐层比对训练与推理的输出差异,排查潜在的数据预处理、计算精度或算子问题。
-
-### 使用AISBench进行评测
-
-参考AISBench评测章节,使用AISBench工具进行评测,验证模型精度。
diff --git a/docs/mindformers/docs/source_zh_cn/full-process_1.png b/docs/mindformers/docs/source_zh_cn/full-process_1.png
deleted file mode 100644
index 27e14e5bb14815a03be6ab6fffe290dd995a8c5f..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001
zrQChS3CkeX1}yI{cZ#(t=E^N4 zQq(`_vva*!##T(At}gYilgfY6Cgu6-?wwk^c$>;V`;6% zjxGBlCyUjKzDv+IrmA6+T!xpa=H)+^k=6M?&obu$`1wkFhA=q0$L+ zYoma_qIEXY@t;&h;SJI)9r1r_6bxR}0r)!lwp4E5$E;}zFwm}!YQVO9{Y9~4yb4q| zWHyy-@+%RA2(PC)9RYl%1QO}=JBz?1d2hSpGle>C_xtPOSAfMAh9*&q(=x1`2VeW# z95O<3#bzh%m;JRq-#=dM#?ab?9xwKC+1+Me55h9oI(2Cv=f7?h#B&60b{4l74StieNWZPm=65OI4*MGg zoi~{;Y)djg<1FR??yER+s%RLH9pVoUkv~4j0@!e*)y<9@Th2Bn48S7EXF-z>hlnyM zAB<2qy)NI-oToe-hs4QoYKFc!oGMBtb={YAKbo8M1q7GvN;>bE`E;Y5)k%cg@m(<2 zLoThG|COkXrk0E?bbnHEYM<5?f_Zf@K&2q+T`f?3K?mOhIaaZ-hhkXk@@ame=Yfl& zWy>s#5?jc``|H4x9-khsRF)zEs>`XyB%7&L z(4YR%e2vq5ptHudm`slKfvV1YKz-OIjy&F`;Id_)kFCBq>jEx zv*mK=${!}tY|a6ngFE6i#pS>)uFzY9-<70c0Kg*uoO8R;`iX*c;_U;Jin@Fs$3o~j z#LFZI@uiecb7f4vM#+b%z3Os^v?$ePmzXr@$g-;^S|))}Nxwr@SmqV&LHEVhz{pfl zv~=6Oa?#tfFqh*n62` z5yUXqn)DcJU$&{;;A(fElU!hxFWro)eH4b7wf$EtSK5xwM zHuROaWFg4!HlsA}YINBVd%BrPCHGB0r0Dn8Cfp7yQc=Trt_{{|jo^K{buujvUR4cz zdbzi@P>GYQxiSq|EcKWZU<6E&118o(wSAabM*$1d$Sd@_?PM+2r-0GUG^K}O zsQ)wEW3y8SC5=&ulsnvh;k>GqL4ojyH(&Up@0;{-RFcG9g#!b-=Nr#R zf>dYaa90kCp+Z~x{umOin?}>O=409Xwln40C-NG_ih2iGsUHbgGkHAd2YNvH-ZQ-s&+VZVSsU*DJ=_}0g0A~tQt&sB>r(6OwWOZQFu zThxjB9kjZ85lOp#Stv-C$4_4y``9PQ)hI5RRO;WzkBUM*L5PCAwi6?yu|}))!?yUUKerCKC|S-g(^v3mvlvJCkz#~mP@MiGPiX#iC=E@JnJuz7ld ziBM2E&X%_Ttx4;xeFUbL5$F$}bfznQ9Hnq^uckE@U=T;G6EFo_G33A}HQ8T=qKwk^L` zC_;$*Gz$^nBBVPtUO>3T0l4Bg!trQi|OV^-AX36-M&x4G4W;` zl_+lhN1^A(D}m#1X(~(d3>+<@yfnzaBe?V*}j{8br8R4h3>dN!7ELWU0_}jFC>x5 z9=A9Ab3F5*iD%&z!O+o+Tp{6xYyZ7<+E z?{uDK&VB`&nSMN5YK*p-JJ#dwYAJE~EF7M}p8bc9yH_1>i00V)>$du$ip;wJ=s8)9 zZp$6cW@!Y6L%CUh^hU&&QW=q8CSVZy=8wyw2%RVKiU;52Ti#6NQO<}4vV;%)p%2ol zw>-q8vE=Js^ut=@9e#zu^z&j0Cxe>Z_ilMf%sECWUnKvM<8{?nZfrNg>T;D|%VXDmmE_ShNAG1wG-#Fn=g2jgjF zSY-?i3X*W-Kg(0qDP~L+`#>ozH+odKu{>u14>IFZDogrcGE;{44XHJV>?99xF9cq4 z$S3vf0!FP|ZhUE)YOAZw31tG+!>QGHZU>F1A!lANI+L8SWsChB8?U}CiFMN0nMey$t%hL$W?6fS_NjicWmlc8yE zxyIv+Ci#maB@7qGj;B|AToj^grKHA#!>ssbaJZKI0Gsl$RUcL+dzj*&S{U@^OMr7+ zsY%?MaJsPVNLQ!AntedFF5e%NfYAKcoq$HK#O+ACoUQZu$;ThEPAux5-LC)=iPdOz 
zl`r|aICc1)i0v@ty`JIYCw_||p$*M)%{BRKz94`kf0ad(1i`nH@09^fpN)*Zc?bs( zZT8ni4naU{CH(5#jzpoJALRBsDvYmoW(63*Pq1VA$C)TfIIgUwaR=(rT$Kx;j*gEP z>z5asoePrRstN4@3W;B0Bo3#B)93cREb(`j$2;deAp9`~g#_%y5Pw|xX9VL5x+^l- z&b)n4PVCBD1Cl7&qglKMjbz&!z_YYj$&-vI1<468$tOwC+<#CnOsEIo0v5iTUJo8$ zqusn~e=H0H$O+8Yy(l~| zv0`xP>WEoh*|X%m>?M{{(Ntn&rcO+U#0(J~Q>3)vMBAe2b~&OHGg*%3YE4~@ihT>o zP#D>l&<7lFB?t*=BU_TL?=^p;rA=tfR(YI%Oq`h?eg1P@dzVL6|6d%(zpo0%v5*9Sd>i5+|0@^z z--*%x?~^~mpyiQE+^2mGypE4DKaveZv!6*HPvW<4o94T~maA0#v=48eOibF3Htu95 zug|BPp1GZmxu3kpMQ@-xLF{Q=7ZnyoPfzS_opzu$$_NP9b1a}{;~Op>pd;_JXgIN zfn&yDd|KVBY3ihk>NV-9d=}kWuzQl3bba(!A9#q4cw-U?K9kKedwlShxZFlMSbS%a zUc7ZVE$S}{_V~+21?elt*?vV|B!qYGs!j1l{Kzq=}6_-Ng{CIKA z@2vAo`}*lJ8Wai$KJcg(i})p0&_g8GcQ4)2L_64Uwcu^8Q{-&FRS3cj6ISf_ndAek zHgx*e4BIRJIKBYV%;Is7^#%O-DaBT`2Kw2dwisP z=-vv2@lkRQ zIIsE#ZFD;^Z7w)@7>8_FZmerhqdi)#$Kex$n!Bf7C>HhVsg~T57noY$q^Aig*9yIS zc&RWQn-vyvUW|J;+U-=&*OVz!>-bu+*H2}owuhgW|NY8U%P0_3nL?4+45IR`fOee_ zy(bSP*t4`jWL#^*Nc9mT(LIlWQ8t6>(5_t|5L@5qWqG85y!BHQM2#J5{7wd;oy|`0 z>>c4VPwa9P8BV3)O7o(y6KoauGtgXC7o2M)y5oE3UV9abwJ-_8C9H2k$tRwQb7ZqyC?d+kcN|>C=Y69K&Ftln&1Dic~0~I@KU88W212C z!T~cq$RFtpaJUPBvJoY807J=Tz!p+ia^C7shCaGfK8|EelgbF3o>3qFO{7b59Gcpr z;x3Pu1G>KcU-ZiPQc;RPU<@wO2?g;vOKdR!p`Ah_edKovor!nOn|+f`WD)HYLY`s} z1>p_ZRh%(^`q|uMa)h9K9*C>D$Mi*o5~kSc-l;I!te|g(6*z-^?>UkM+(?0?|3g3| zHWzgdkRz#}$^d8|EBk4{{SHwwz$@;>%%@7mf7(nt1>9};*nb6dE<3~zTE&C8{=snS zJ&@LrPiIdQG4DMW^?yt+teXUSPpMa2o&F^-y;p$Xukwa3n8%0Vupg|0ShQHY0(?iY zP4WH!U@V%yOa7HXW~0_n-*IhxPD@2UO2w?(JiT9E>dMp# zTD^8N`gw6C=;Ecgmz$n#bA75a3OUCBzO8eGmM`Tcw>0%~y46jgKHwQ43RaskN~b;# z8wHvc08KB>jHfZ!14$=X1{0{V8cGPn;bs9+Y0U|!$z(wfN^lK!`>@fPDG+zE1Uh)j z&A!3sbg9YT_ICC5%QZuV$~gkeCvQtyfMCvyn&s8w)p!<=LdXOPbOzw?rXwJsWN_I` z4ML5r+PmhkJl7+CC{CLG6N=1t%m89WV077yDK=_z&!p;E_g`&;PFOK2lRNr999T_* zpI;Q zM#mow3G|br!00NvWK6f6&3#1!Jm)ssxk`sP3PDRCBKba*lRfljpnc`Pa0n2N_{D74 zTx+w`knsSJDJ7Bt1-6^li7(hoM4#?@8yq3pmrH6zM?glRA^5$i>dufBJ76~akol$u zM?@z;epp2LCfGO3S)uJ`0g!$=$YsvMMF%iBCNGszSz5TtdnHr@(sAiz(gDJip3P77 
z0|qblzA3*Pv9vdwK^oWi0BDFAE^4_obA*f2wGM?&aEBM{1UE@}3}^FY_vQT{)79t? z6MyC#uAws*3NoIr>;M&+*aOBpL(F6L92|hIWp2xri!PZFz!Xu~zq_E9{Dp->^`V2xXrxlz)eH_x)g{C7S1r(`KK1 zz000*BC=MaS&+vX$%mKFB(6fzhi)f#v7JVTRT=a!lpJ<537WnwT0iWp-U#fYGoWW{ z2uu(n)OHN*F6RDYw-zOar<3k*Dn zL0{Mj%0H}@h^dv@O(EhdcR4MAc11E85KGEkq!zu)))H)lcV|4phMmQmc73h=`#aw!Ca4f zLM9kP3v*Oh(*ln+>2nB?V@m6%wLnDcFTFyVB3H(O<@x%&BET*i93F0t9KCOS<-vBl zBL}3S0M#zP7|DXx{cCe=%@^yfcwb@=Cbj0&+yuGGhU8P^Cz-Tv<9`M$ISoVyfR6*< zu@Tg@xz^=*#{Hx*s9@1OYS3W)A-gq6`*9D%rsMXi{dS4E-eQ=Lxssg`6~TJZGs12RhxbI3leRzMFeKLjtshK<<3wlp1oC(yX-&o^Au!S?=|#-+6#Jo zuS9Q3#wr>K68Q+pWHa%X@{F+Q;SXELt& zx};kx!Zd9imwy`#Z5aVL7q!@YfJiF(@F(6gIHBF}nrQEDEM4f4+)Mt;TDsJY|4&H0 z#+TS11F+=GMWx#mkz3(LDBoi!$;!GX6__xZkku71QV$}Rjz@A}iI92JH9GQ%M3G2N z*L@t>$U*s3mJAJ`Tw)P%&)PsbbGBi;9jHJQ*&UG18;YQ(JOEzFGXa5YEPaR?U}5CZ-fvj2pSD!>+@! zZD-2`@{M7B%yH}l&pBIYxcpOMIXQwJ{J=zUm5Cw#;H9$B7e~<>MZ}f{EVPD$ zI*9kNTXGZ!O_BOl7LqM3!??1Gt;0)cV0QIqGQXe7Dj*Sfd~0_iip3`*tFdhHtMy@7 z31*}%Iq)p#d%-Dp^M5MeX?qWI>j( z0eCOhXAUSQNyb+nCIACSHa*3dO2Ae1@TA!|1V~Hqy~NmcyFI=)I`cSQs6J~*7Y)dr z?(iX!KPvZKPL6Y(CZBB0=CRio=&3AvWQ)+}ZO#(sQj{ewPFd_ucqml-se++U@m#iq z$<)0--&c$h2jHPh;F*5czls8Kv^8>lD70p%XwVh&!k*N&0N9DQMyV=7Vxdi_c|T4Z zG_Sl80J&)*`jT23sv5xZRYFb{0u0Z0N(C}X+Z3Rv6d_jf$IwDIL(Sa+TK;%urgyrv zwQ>;3%j%9E{%1mEpNX%ltwx3C9{>USp=FIR>Dt5Pa$8R=$|tMAMx%@c><4qdx*8X? 
z0Y{k>-a3Ne#eN3#m*b+(U;Q61WKVd{mYY-xPwIE0m~ziOUcP?yqr)2DIc$DSR9x|b z?vIueufFXJQP6swF|My$(+YZ=rg<;e@i;!oKRnHDuWr-CTXQ-qg=30Tg*kpur3T3r ze9?CRPa8I9SZ+}m^@1zOSOdta_sYnpdWkUZ>DUC|N95l;zMn^m89 zlW_ak%8>Tb3$V%()0RO2PYlilHhzG$cZf0=o%15r@?h%Ep{NhfY7Sg|chvE%!deh6 zZdp6vFlsbdBi!_MT#d=qe_bTXXEjWH5Tr4xCMq2DV>{V*zM4`HmVODMIjO1LhT+b> zeNr$yYkx!)V|cjVqmtGleV?k<)qeNc>cE2`HGEZ#RGRTbG~at_T({mbhZN*}<1MY3 z%&g9jZ3D(_#L;rU+%B7nzZYk5D7)1-0Gna*yY2^;iq=Gazsk2+{1tXRYMa&WbtNyk zS~a&041(yTCPUU0g&!YuP}IuQU#f9`^rfG@Mj%iKG5GbRtke%|XU?kkD_y_zfX5Wq zZ!0#>pJ0#XL?N0USK_QFoSAejja9V#P}UrIj`|O$E{A*sN{CjLRWSE<8$u^Wp~4US znI5C1(q|~R&bD1Xizf$kJ-fNU${!QS$HlE>ZOebW~WDm7IBbvar zxMKmtfxZY1lUIo$y95^Tgo-^4CWQU;03`;s2aqR4pVUYN`(?vb0eDmcU(dq5s0zN$ z^)3$=s4i1^7i2?rJq^Th8S$Dj(zxw|oH`Lq;}_aI0^~%gOc_ezMeXLk!p8t3EFCcD z*kam~!ms>N#(#>}l+BWyl*%Vt?Fe$(3I4)7H{KN0%0kG_`ZN$2_%XEKxKx!!2qr&+HI;I2DJ)LH>4`tO-7lB}UI3hK4Qzr(SqX!`DcJMBDL&a!A$ z$j-bC`VNF%0?G=X)azIq5m|sO2;o>h3YwLRv`SE0^U$ld*^j&{>OtX>q+0{eDun@` zZtAh59-Nk(?nKB5)&SoywYGa*VV-wz+n5fB(OUw$Sy(IpZ>~@gW7q>okoN`Tl138S znlB)iy)wHzA%5B92%(YERBC<8xkDD?>UMZ=P8=%wbwO3WCdw6K%$i~=GQCBs zUJgnAFrmO4%AnO46RXjC0R7u)jB(-GzDH=8BUJ2UynjsJxm0ea8mm zpZ1n&6y6_lnh_j2oD0uR?J81bAi7Jg=9U9cG3QR$sUY{;FQb|NDr*b=J8f4crOXs% zpy*`-?`lg9w3xwR@vB(v*B9p!of=BzaLj{jE0FzW`os8eqaA*0 zKssP=h?i;-YY$|hr?}pd87TWUwUImLkw~xDARcnP4fMIR(8gIjo>3E6Mfg*u;k%wA9z@5wl0A8r_KpX}6#*Ft_ z|7=BofW=TsQqO8RwBB-5m65KzXf2snp?%8ARlvhv4FiJr0dfwk`;w7Fz|t;jCZaaO zvD|1!#N4Yv8h_cU(|1&XVRDa0HQ+H}9n247ebir%2f=*oh1KFu`#m^u=J84yXmR#c z9)b?OosCtHh2rLzWXUe)`kILZEOA22z!4sagimRn%$(4s&2x+j2PfWMAm9`r()(Q` zUuE?iL6ljwgHbeyMf2y!-jeS|ARWxFhF_^!Am1r8xijaw zyJ>+-ZuvH<$ZVet{*jkhF!gYf>#Om~JYkUkO2{(F-pkNr6e!PXmj~h_;&7kenKCE4 zUv!siNXI}=u{b#_mhd;G6AyOx(4T&UzItbb`BV0F%(CNr3y13;HS>N^7*oyJ&Q@=tFl|^y3F$S&(3u(yp z!IOxz;cCXuw`64d9l>yaoV(RIJyDD!DoW&H+&Mq?g>zH1QAq5>RMS&l5pE_}x|ZvV z;K}8`xai4$TQ6zfo}yjo&vrlQ6Xn7t9(%ss<^;`mpN0Ui%Y%;neI?A)|I`QSru#vc zr1@-hqx>luPpMr;AeZr5A{*C;VmU$L0PM)$~8#c#q%$Fa>hp25PFia z#YmcQ)0?_{aI~{cFT5gP@SrpLkrSddkIi&a5)%3InH 
zbi}h>176^Iw9kUt{2jD;*R|Jz2mtQrV#gV!-gP_QdcG2|OO#fBhS~nwooWQw>qr)4 zW(MHFe@L+RTlY|x-`Q(m`|Mc=4SPqIRq!gRPya*M&dS+8P)L;P!;9Fv0_}cuxDiRm z&_)UJ=WS~`Ro(>rkAL?q{4ZwwKat$T*ACn~s#T5txClVJ_-?+|PtbH;C*oy~q=6j; zVdZ3h=Qy15D5AH^oC1aFGPHht!H0Y!D6Vng5v6m>Uow=|_%=VkOA;wnf$wEpJwYUu z*&ITTy@kR0o&)t`>xMGOY06#H(d->n#Lf>NlgFcyVP_Rjsc=@dG2G8~2Il0e4}{NM zzv|c9NHR=4l;Yl?vl>%J4xm7VhKoIcOKs14&#$mQlEEo(**(_=bdnRR)Qd53_2GZ# zTU@LfM;7NWeo0cykOb6d$MpA^cdmVfsRq-O-Iu&#URYmyX@nr5~!mT$c$d!6A< z{9EDnGFRJ!1(MWV{du%ys<)rUa)l$p>uyS7t;pWAUDR`mMz8a4zXL)y-f1|d@fNos z{fgsrps%vSW6A9yR%MFHo~G|qhYq6II@tknnTp~JFMYFlX*?8SrRzIB_V{n(k4MwC zRLAwfw4MI3?q6oC?XYGC)68_w;TwzcNX=(Ry<$J9Vlaa@Ol#GPJ{?SQ z06T{`?3aJf+WX$SZ=!M-{yw*08l5cQucA(aKzFSdFZox+Q#Rh*T2k_za|@>-U!TH% z;h25aXgt*L{w@iGR2v=c{X26y>Vedb`FBFr>gBT~p>Ey1k&J?q96!+D86-eZo#_Xc zGLTk*GqD&_ouX^s0t2HtSqSuzOv$S#J~2Z>){v2pV{#ZSP}_2?TfX zD@1LH@g;mm1hxPtQGWU3UciMHnb6R74h7uVf9dzfmTT zZ@v`J#rd$>c~~HMAfbK5q@IZPajJ>@@jyKaRFkZkGL54aF)kjStQupuyzslfltYcy zlM<_Q(7XuBxxKXadWFac-6n1B9cEWp%r^;x&U0qa z4KR4WE-W~O#u|;)9cR-W1osAUN|pcwsK-Q+)z%4SqD0a za5!iY{q_n|(=1gL7wyMmy#U@&6%*#^LfxqE!vUg~NVo%ZaIQ$MWvTJ#Ew81?XS7#6 z`Qmj|WoIwS`B8bk~8=}3PTX2=FdZ%ahs*U2_BrHP-slTs)k1L4O&hGa&D$K%ux*lB|(OcTs$yJ$I^dv%E7W;WdKXOFu-AY5zl zIVivy-&*V^2qw~SH-n`JL@-;`dG4=~(Md@l!@sKmqQq4o6N{IxTY)U@&(|EAOh}z= zH~%!HbPh~Cihv=`<0#C6LQHbA7lI@A_7PLws;P6McfPA^Mc8O* z3PcZkiT2|mKgNsqzQk|q>EiXhi;28(AOzSxme)4ln^#4p9{PA;W{9*FO)6-=g!ylW zIZe{2`hHc4UH;V5#ehGsrCTsHO%pTAYQ9oW5*xQ3E6$vhpBus9^}yDMNo007hx@VT z>f4-#g|8IMSQ>fcHuPNYFJ-ny(pW3FzQ6SL9R@O=TscH0$6o;3+OpOCgqCGBd1-$V zQ@_cf3N!GmK!O>Aw-m!b@6;|@Omw;HuDFfaYw?U&9{AkeX1k;(g&s(V^8kY{lf5Hq z2u-}kSopeBTe)DJJ_ZV*dbjgNw$~vX9lzhsFRu!j7N9cvM8K}xYM($Lh^hun56A8L z0&KLl20Hg@Hy|Qhtkegi2xUzKN3isvC7$Q>zFY9oKdLJQ$}BnE?kf7}UASlIVGy zsn=-B4dk1r8yp23R&*_r0)I)j9ZcjEbjDwWdJ+z_l}#IWcz@z8`BlF1wS7E_K%>OJ zRDWxj4VvQqMypjdgu4S6kkO3VM)7?#kiy;?dyfV`o@^?SM+*4d5o8zX0%`ck5=!Wi z*pKygFhLnI)HhU83ax0pVQ9p4IrZgAhwi*^#7t zto72`K#a^delk9f;~3w ziD!?30K3;{9{b2cig8lw8ZOh)-fVv5fZVr#pMVo#ZNn~XC@=U&GnMJ*Tey?ei 
z2Qh-we&Rt_t}$ANO6MLc=l~jIDAU9HE9r%J{YY4$mDyhB7R3|DU5*sa zrEUhBV}*ww1m-Xn9@G09;jwZtE#921wdTmxfJ@N7CH^{zi3_l|G z7!djA$5SEx+r@~^=rc8^@(IDVeD@YA)`x7a2A3Vi2V(M3O_%3e97hjj>Vz`N9YBuN ztghE6UbUPd=KHI-kRyh+;7vEO`%Ys4V%oW$)aM<SMk4)~NZp>M@K z$BQ(MDBKQK$LRzt>{GIWbmCYv?xBZdInZcf8!}#24#vq%OXUn6s@35h|Op;C(dyGRYg$lT%t0@**zOc4Dq9 zt0Y@3RKwo&L^IOD928DVo&p#vZ~V?^{u$)~kU$0}5?S*oCz1po97R8D3gg{2`@rQ+ z9Aw%~sp;9HM>AJJ7fG&MkwbS2UhY`%su>z}jD+ni#lF-h{Nky`8hmBpzDP{t$zJ5)*gO;8H{lQ;1GGSqjz%Ixs>Le z73n;2kfuS~yD7L6{=8EQcqm@QclCxpk}O@V#2YN@Nqx>lICp_d#fy#KrMfu(mTK&7 ze4*=7o}-Ct!M;w)`EdpBPEGRF2fI!*n?NbSEjbKW;1Mxj&%W1YAb zSP40I1=4t6LTxM1J?-+l*!oqUqJ=|@#HL%5;uzDD_;!V5-QlaOu{G5xUME)?uE4ze zyg2EUxBC$BU_Hh4G9ufn(?S9|X&y7MdfcO$W%>G9gOnNM<9?oj)54>`B8zkU^<&Ri z%<7W|sRZMY9pXJ&Lnwz2_C%FHy6k!5=FM9A%fmkaDJ5MC-=^hZVMcR zzO%}9Qb`!{1Lv?!IgtUc9qt3yP%OOF)%9^O#v$Q)sur@60$`Ah%-8LKjhPEsd_p5;BU>~Q`jecb zHk)JJ)#UFF*S}y9RGAO=ok6qNU6IZPOgTDy-6biWMFvlov)_Csy|Y-0+Kbh96g<2f zd@`_5<+~yb7jj52PY?8mJuXFSv76Y-$!`@Pfu4T7$dgdlo!^U@32fLwZcZ$Jdgnag2=lsTh7IY;w~>yse|JV#GBlHR!L{c;5;{LF z?j6Yw?}+g4_#^vkw{#46WipgYWQ<2M-*xr#x-c=}HGjBOP*z$_&jCgl}H`qUecrp=H=d*)zh=JLhZf}5Z zr}Z>cyIY(=t8f)HqO;zxMXU-{3K?y}-g?Qty{Wx3H&bV;Fd%i1OuOYo7YXl3b+AQusKuInBm1WUXMe^GOZF7f{J7u8^@W-71K zOwuuH%2}P~k`lXw9_dK77sE1@RnJs*Z;>Rubg{-1*IK^vpC6j)dsavBq4F1)GioBR z5C>6NED^UEozqI-`tJ!Z^u?PhM7#V6GV%Cm;qq!}@D3i^C~mP0IYjFp!@(VL|Bk|w zzcb%Ph4&=UQ#;fk4+&WYo190X>mp9RCX|9PHo;Nedy7&E-JgCqzQx3lYYD{Kata=7 zo4}FyP$xqbL8<0q2ml*KsZe!y#d?3Pue;j~CSy_OMvi|X`1Rw@%Pe~0*ll{^ISAAY z@bK^gT?++TavU@_AZ+noP^riN1LRyakh){m93ZQx@a;>Ry@T#}`9S3a_rfIV&4WOd=VsGT0{jgF)PV5W zYmFE_82IffsmoJz`fG!tCaz})E~|Yc3{xWS zkCT?Vl*0}b$H9h58W?%D=)5^5#n2Y|Ae}7l5-;kPI@@Cb=f63)XY+!Zy?td+dX#Hk zw~H09w-*p@8IoyEP1KK;vyon`=!<;Z%J|eET|;sr5P|w(Z@6~Ak5C%ZL?nStL&9Z#t79m+~m*(c0jRB-1Rk zD-^~Towk~Ip~pB8LamUN897~Jll#I%TiFK79S;l7RD>fSOhWM*(x;Ns6?g25>_;7- zX~w699_x6`w)G?t*M6nvjfQR_$qz{k#;Q?)Q|UZrS{mv$c%S9r8($n7I3Zp`Q_1o1 zU8jv?IbOZ}z1o3*+#Ts}oCkqbU6EJmr1=Ty;SZw6nb}xhiwlK4gomRsB<=J1e4jK= 
zm`hR4L@?}9*8SHJv%ICRos7I=WLok@n&CEmO8R$(`jJ{9IxAa8v+LfcM~(E&IZ|Y_ z>9}LS{)y$PwVG5)IKb5ikV_Atf{N5+9mw?KZ_`AtB=p8w?#p+_)@UcN91KHz+ZIyvcnq*wFM-bwe2obquP3ed&uwNw4>I&~ptd_UP{GIoH8jqoruq&o# zBOdUW7wj+ zAtiRd$bHVM_mR`2Q$a6!tnodE#K+B-{j+!1b=t=;oxfcAO(D;`D)+# zPqS1fyND}H(mdFlwNKUr4_&$d9_HpRk&Rl|_=1TznJ5aotsjx{SI$of`z);R}-RVw!z2j~q&o(nyZ?V-jBuJao;HN)^@hY87GI~f*yiFi z+dd*A@%elTuBgUuqsHyZL4B7)iFEI#>Io7_{R(3#cB@q)$j#4$lRtTG^FSf&YO|PQ zJP+YEDpnl?$(#)&uQ7eXEpE?fMJ&in+1{2D6&e(iHQQdxFw=DzK44O^?v~fp3q)(yP z2sv$9--1*oFV^x|8!DwGtGP%)s6&PXoy zS5Jo0vE@5l^D~cqYHZSpCLeXEZgUC=t&x=IR6p;zc8LBu1$HT_6U5yAHN`S#lYK9! z2(>3RE!Hf}9Y|Ms-|>;AtoYt_KmS$$a~08v@5BmJWF_Af743AOh4_?!wPZR|cM~F* zZord=K0-Nn-!xr$?OD6JR*Uh!2Y4uWk*OzLb_O?NUTNV{OGZGa_ z23h(?OA_Z_{irP|bf;CcOPGJSB2o`knsLaAnI~)4*;nA6hmGWK=C%`oA$C7fbR-7^ zF!m(C?5S2q@=R$(R=1YONreu{M3!Xyt+-!#j?wv?s9i8zic_o>vFt@YQ9tzT143$W zvGSv^@hYl^-VIjocI_I?H(dcm%9@tJBt8TJFQ$j-Y0sD>Y~`d8TDvwrxq_@P27kO~L^MX1b#zIL91WfUR?63c|U>LzlA-RM?ZzS&>xFP*TV7YY=q zjb^5$Zo@76SN3`!MNU22rU!?ai_R(cB6Irn+9e2$nDOh zAF7jKC>+^>@DOX!o;t631ldVbqOe;iw#-oT8(o$2AMnwRcDc;h1H5A9#?uV@qgdAF zklF*-Jk2Ct>-D#%CH@BsBvX;D(6A zUCepIp@b*z0-JXZ;*UcvfbzpqT{}=?)c%CPX`HQhi0xRi4mHLLZau@t=~I=KQ|0`99}xX0Pcvx^gLwab-!@!W4!B&LaX5AI ze)p$RQ&7iyx~KuOq~DKc_S$wx3H|Q!k5s}9WS3v{W$Z{-ZSO#?4GN_t-}7w|h|93~ zp`{VBLqmDZCmLXQg}C>v_5sz@(+{QUPhZI=e11@l(%YywsOPrah5rZ*6X*L~+vel| z86{GmlkE+E+g&Ulw8Mt+_kWHXv<1++TSCnmy#_|8hB>6uYCjamM zxcKnltWQAbE}_{~$-%B+Yw<7|wtP+sR$rk9d21VuHOjr^@p8~`6QFULxY8THamNJ* z2U~d2^J$CaXW)WtHjObUoeroWrMzmX;7mbSSZMXFrRPNO5SzDVNJ3d5C~*JJoV6`Y zdMvv8v7ysPZm zL59kSt3~8WwQhz-jNL}oMO(G2`wbcf`kZ26yYSL#`x9;-H$XSR;Qc9+uvI0{VetO* zQ#8UYaOi)d6JKLof|`CC&guiu-leFh_1yO#w&*i3p%+Ed%Hk-xP%|v zEFLm`+rtZ2*CB^_e2&!Pr<}?}emdq_6^j_q^LhmdO2OtRuC6F5ey2@g8)!nVo^ai= ze^D2Cro^c@te(8Y2$I>U%2 zg`t*vo0RPXTG`64825X~hN$;;LWU~p@K-?W>RE;#HXgkCDHJj&^5(BJlxq7xFFM-^ zClPV~?Dn_9`cqk@S9vG5*R5bFO*z^|SC}3<Z$IsQx~?vIkx4NUUoA%cK z&mje*3vacRoDKXmM#bl6>_uNd{79l2d8kq3f3Vx$`3r)(W=ow^DegyU&Z5Av_1p&W 
z&G8eX;xfG~?#p?Ds)C(V534vfl2nj@Hcyb|pzf;51o?U;n5$Bm(jL-yeWQr(g!>lP zQRCj?59iSPG=l-D;sN_YL>&4K1@5IN-0`RaLu37B_ zX@_E%{O|(Y)@?x6NL<}!1%pkOJobL9sX3FS>QooW&n`QV^4qRoUSCi~yyGSpux}v| zu>U}(AhU(sb##|X!2#^zttcmowm&IQ18!$lp)LbTt#k8EV=FM?QbC3re*$zGI?NtC zqs%u5o;q4OjT|AhGEv*prx%BAKP+bJ0{{TT3fVKF9K4kg-<_e%#`6Vemj3+<(8+e0 zD9XY?j3B3?m)>gVX?B)#-xEiRG~N|{G*Sv9><_SCwfqkFAr&Nhe&5%XwYAbMnfkFbxbP1cvjw9owccLY?R;jpcZDif_=)jQ1#V3CfE zzI9oGR->G@<2z`KZ$hhBB0-6QCMoPwRQCW>omZn)1mLm}XZL;-kC{nwLbLgI#Gt6| zmwC^B(;8-w@EQ^De%ExqDev|rPb?4zEaF&qvA@j?coIR_!}EYu5f~JHVNlkN)R90Z zqsgw!4@XA6zc4s9%~4ApH&-W0^^S$9QelLfB6nhGrA9uUd#t4{=uAS9KjxgJ(CZJ7 zUj~@c`&^_WJ!EFkoH(J>B5k#F_jk<|i$cizOR z4xd?HPISj^aaoWx2{yb0=t|>GryD5!IhMZ$#q7AbOgaxQ&UY8XC=|dkS18lxxbXDA zHRc0GQ4^l6HDs!@S&HmEW4PB$Rs)Fid2TM|wa)Uy`nd;rcw$*JCU!nco5Nkz9IoZY zb8cpzyHkatI^NF%Z^aI}KepZNCPn*P6MDlFfN$H3`%j4DGY+2?XJ7S$@ri3 zrZ@2{y??jrQBrh#JDF9Q4sKLFrEltv=Sat+O~L&HYK{ich`n|PR50VelbbQL|v^2KM5 zN%N2sA2``QeO`)Di^P#T^DdcRK!swbx(0-+aYm~wxQ}j!pWB)V5V|Mx?B?ThJqYEr z^G3_x`{LDtwQ|BC}c4*kEk z6)RrMDh&EUlq0|8XR%Akb)C6Nsp8{SU%G{gxQonctP?e)(x1G`qjP%XR+?ei9DD51 z4yu=n*V&5M$}R*>Rpbr;oRIw&?rSRM1Hor==VCyEWuw7yV;OMB z^6A-Zs`AXuh5|rHo1nnQ&(Op`9^e`GVQe$F;aUWL^Y zT3;am2r{S_Yxk#fQZqUKB`$KigZwYmW3u?A%wpQ!#W|wIX82~5llL%v4hY^S!E{o!vgU!nG}#GU!r9e6xHG4dm3UaX0qrW5msNr5w*p; zIm16+GQKrb?y{rLPy}#kr|DX|wUeWxJ&P7x0ZaC)&DY)da(;pVl}Rk)!NP$6+Jn| z#!$C7ABQ``M2!zti}OogpPe9PP?=hGmLw#p=Pr7tXP`**qSjW#=;35(uoh7+O%drJ zFAIq!idO-?z*q;|7eb0EHOD5GHdhA?s9EMIHMHR&XnB6PVFRo@>PMLcj&72TTPAOp zT@^+eU*CJc^7@h9`HQiZp#B(T(Kq{R90d|05C+Lk6gn}H+X28zYp+KsUCt+4J{tB{73R*f)8VEDEroe^k$?srBlsM#N@qK!sx1&f%H4n&!_fgM#?0Hspc!7qvp-na7@K1}Jzjzv%;wVPM1|+Ivu@$>bfbl>d zonpEgT^%9Pr}Hc{i%rCh@4Q3(I6Ua1rWKwc}NIj?@q>#*ZI?-mv zwx?daBmKD2+6GO$-?TvjUoCb!SX-uuP?)K6fENp{Ke1mG@;!G!i(!3XypGQ40n)#4 zV_p<09ko7Zi!fRg>3D6vW|etVk!LGb&)v3M4lG~=F!vohybf+C$oV~}+6XWlER zIR>?!oeO0ZyD1h*_lMTB%~THae`Qfy|7d=Bg{QV5+j_dUlnjdTx@W!;g(4Ic`!P5i z$KI@z z<8`=;a7pin&@yibJ_8Rx(?6bkFEb+M{`euBh|hXTmJBtJK3sz5<8t?lO%D8SK2k3q zhFARNEno?Z?-^s-aH7()khZ`xmB&tY@&Q 
zpjn6wIq4_x#4h0i?~dI~*$(P@H+QExFp?5s1p3v@Z;Y0)QMWs-*I zr5NfwaJl`)6H&s zp(XlUlV`GQP&*(bMC#wuImk|81@lOU2L&)I99|H2uUU;yXNU1@Z=dY>$T8kRV=Sl_ z*xSE8CHv!9&~AS3!{rG8!DV-pHF5_5BLVDXramGAkIZ>YZofazW8KAxwYbOcm&@Ko zo^WIQ#A7=%wbSk&O-Q>@Szzh4`^N_tRjbCD#KdtvG>Zx72)K1fH8E`kS{>oWaxU0J3bk&E_A# z?PbiNpZkLxDU?~ApqY_-LW~GkzpRP%{T2CM{_BEkt-fz=J;y-V)A-Uv3zct5jwn<>XQN= zR0FMleVySnS_Uh`Qahgl(P&7`Yh~=z_4uz-*gkwvQebkm6uHw5^scHaw1cCI88S&N zEA9-gm7b%zn*j$@%=ImUN z6UEpZ)7#TpXJ#F0x;jvT1MndM^6ia#C$y;Vhi@jowTyF{BktZ>NTd|K24!;#hGF-{&0}y# z33xZM>YF?-0;SE7X`Ae)6{v`7bX-+d?lHXg=RpsV)!V7h(lfJp`g<`Lc-py-PZ#W(;@~zx>68IU8 zP;;s{Q6fzpMjU$_l?|?PIgtV#S&x0hgs#3?gv(RL6fy-qFn|)9$ml3p&cX4m77MGl z!*W7!n$~$GC0^W$PB+CpI5xSB>bVUQe(3^*P}9TB9XwUKc@*X=#I zI)G}PWBrcQC{jmrP~@b9M98^>S+k@7#>gph(R1_2?@LLJ^5yWAx{$Q0uRzAr&0 z@Y%f4Fud=XCsU^hcXWT=opl+dvwNVHIq;{otS~9~FuLi>?bSNfY0KjvIr=B$c}Dm2 z>23Ry#-&gWA}*TtdwK;Es3=?dVFgLy%#g1fE(YN#qi9AEIFZlF*i}%X2}knBDl9hg z3k9)WVg(kyVA`S&+C>UcE7f_R5s~<`RlQvZah0dm`qwMu7zZB z31-RiqI5Xp8(|^K^pD`+x(D4`HhxR3?Vfk+$jEUhX%YWR6Q&&@*a36y&GXN7!$U}* zK$cY|_+~=O$bB;n#WMV#?Hxf(75^XiyD;t(%k#l5_o#kjE}@+earP2~@F?!&*h-G7r_?o(k60hMbZ93Ma!_Z~KIG`})p zR8BKYbIXe!ScL0gGCqwFT$3lC42oUXNtLz(R#ps2OLXG44GkouYHK2P9j(&QTb~K^ z9=x;u;ujC$^K3!sJ!3XbZ<9yxm;kevBj!%R5FWp+^{sG~8s7F<*FF-o*ObwJk+fx zy8HLp5(ep?-yei0iEMj$IRh!BwcmkQ~^smxD{8WR^(Io!&8pi64*q zV216QwokCXIhl{dS6gV4@-JX|NWh{4cos`wt>5P7e=)!h60 ziXmB<2*Xt23xssHyhz;NbbvJuANiTRZh!BMQg&WnA#&j7Dg0G9Ck0b}oEry|bU%Uh zFK*j-srBFN7sL->eO03KKM(GH1n$0{`4n-+>*0m|&`sh_6?D>UZa^p0b^JS}-!s(Z z2#V+z%yO7e1^GQ-15>>Qj_%#zRnES#0@W8ws99>c_x5f6iR(Swqh5ppRdk#yjcBmZ z^`&x7=0#rr1GWSMw2@jga^HtQojCV{E9v_E79e1rd4}H}mH0Ga6WT+Hes^GN$zz~E z6ZnSbDj35O9jSd!SeWwVMLstC{Y@huFxJ1nQJ3uuWB(WqL74Hg354ofp!U#{N!VlX z(F5~7I{7F)AhAXR3pt%Wav25tyG<^2D)K;|d~yv+fUn_(WdpeG*YNZ-3cb&K9A)9_ zg5T#IsLlCEL5Oz#Z2ti#Z~$|ZdSBI8hj7*~eeP~qPT;F|JBTuczzl4rAgNdJmws-t za*@!R#%sl=*cNY=0lS zjo)(>Tc0_Tdh@6cC5>xI;tj1!j6EV2M&V#6vu3b%n+fFs+-KvrrKI(=J`fH;cXC5B zXJTyx_KATUWeF+V0wF_4xyV%oia8A*`CeyS_G&Kl%FndNhoM(jb_%)RM9mV=n5DP5-+; 
zKHD@}9~S3D_*dgVLh74|j93~8)HkwhafYb6(3|Ps7-b5`Y@K2$XnunRR>8Q7e@)iI zuhb*>9R6XFhL#L^$fTk==$p?o#ijZhb|Td+-Yg*nbHbL7krr-?_KNEQem0s8^CP&R zDU1~i_Yo+6{#;k#WK*G+v@YPC3`^2jTECj|0fo1I^C2_>7(LH<&U@2zptd|F$~l5Y zhqGmu%xhH=ag`lq8v1-%^`y>NF6ZBGR0+_dG@b)ZM4aK}KuT0HfZakYJvAqJm$a-> z0Pv%D&=E3fda~^ac8B}*eOsJ3ZPg}g9Yd0ybZ`Awmz2*QXT;c}aE-6vEQq4-T;Bp zS;xpGT0?yvGTg&0e%Zvfp-j7b3thvaT%Nm7b>0%x(%LyfjnbmI(Y0!$57MCyJ8*G($*f=y_s?m)Y@`;F z;>@G9JFM3fG#(L{5AbL@1dYSF0`9HwJ8g-wvz)t9WEpXF-WF6*uSVn>)gKS9We@g5 z5-pQZcdOEcbfNA55p$BaLDp(Nz`g#q?_(3=GEqGF2NiWOFgEm&&4q!_NoEgJf+9Zu zL)}0<)L=vr==Z_Gm@-5(49>*#^GW3fSj9h3sG))F^#@a<)=yUcJA(03n;LrpCiD3w ziJ!jSIYAVVloHn>7iz0xoG=(OBBi@{(Rg!x(+6X$iuhN5jE_SlrXWFmee}to=@Nsc z=d*Z6K)?s4!g_bu!A5gFcRS&Qy#6*vg=4*})l0(dbuwEa=2j@V7>2@1h84N4*B z?}s=hmrK958Ik6X*0Y;5l&n>kHM@T$1=-3{8@>upe<9s!f3`0*EJ`~!K;%Lio#g71 zT<433@o0qG{ZK{%lis=wYz~y_W#gx(ll&6qH?{AmQ5~0t^ni3eH`8% z_t?7Ppr7BWI!YMKqCA(KCfFv`S}*2AUE&aQ5<$FI1HkvVBJs7}_<4XQspmdCGn8eq znRX1!^&@sJnRYLJnrkX%c06&9L&c0nzveCr*d0p35J2t*w2-Bs*WA}!J2cFUeyu?B zXO|YEqC*aQL+1y8a=?t9izb1u`xjE@ESflX+SB{Qy41quru~za-T!`7#z*RXt;RM~ z{I^wx6CGG9PTz%S5$vZys%tN+6-H=t>C#zcb@IJCi8m(F^D(BlSi-eSu_pT^*B3wD zdW(_MKn;D@%v7H9gf^||jYgC{zfwy8P3qF5_->k^i}D!$2>hmX^eY0)7>$b^YfLIi zSfoX4WcmP-5JA;KKGWZwww(2d8q9~)4px)x&P>kB-H$duKK@aTvUC9$k-q8b`E^vE z)|Sq9hYP<_I7cK{*xSn18Y#;0_|wzr<&b8nh`R`Rtkto#2G6^wE>b`s5wcDhplq^^ zS?cshUph>oT155TC#Y7L{0&vMQCK-ps00e4lTZ?@IotRpMXrmFd8-!M58xZ8F9OMH z(+L&}Qbe2AtA6Js@cv8Is$=BbuGQdNJ@+F0FRzH<2hDeAQ>Q7s+2E*-h?M-R5wL~D z3lEpWv%LL40)BF^dNyF1)i*^FK*2thgsP%x&A?;Y7qLc<80ow4D(`g9Wa|+Lm@;U; z8ZMss*QW84e|XQKQzct#S#9`!5^(tsCt-tFLs6xySsq>1HdkSb@4oT032ZJ?0DJDL zNfgA)r{^gY&?c7|r=5np8`!|gXA(Q?>yz9u;d`d0#@gk~pc%X$_l-Dm*&MT&=C$`T zDh6vIaU|DM$i}tqVmD9bKX$TW)Zrf*{M%ivCl3;mo)~_xJI^QEnk*YN(G`9r1E-7T z4(E(-z@+aiQP>04oXWVF4A#~uI;Po%XzFJvAG5{T;8yyYl&4^)xdHn;4bpRzB1Hv~ z6%ynP27twuJx@iq-Gzl*fktN1jJjI)AAU55LTuFFOw)lodFoF*eHB7*tHw>tky@h* z*>zrt*&dp@uEfSGMv~tMKcOK8Ttoc;tJwh}e?l?^4Zn1A*zHOW1K&Lr%{hBi_6+4` 
z-sz&KFQ`p^gk~Ouelz4W>8e^xm`RBV6`z`2Dopkji+;^YU?8&r}skR|2OD zDUxBUC)fi{Tu7M6tF65LJx^rF%R6U`77s*)Vl1{M40t@7FnmqTgC4v%-TMc9*~59e z6Cney1lhKthdl^cOl5rGuCPB~a^FN({|gRcseIdui%UeLWYhRji(<1`@zkTdI+_30FJY}gWvtsb^&H|}vt0Wp>NhMF@I1es` zQ|l>H*k&|e3acaBxkpQNU+Sf#@!1HLvUa}J{zS~GX-#49W}5z50TphkUkbG=Ynp-x z@ZE@(M&2J^KAL~8v}4wv#31Ra){~Js!%e_akL$oUF8uKnXjDUQJjh*}95>W>GeBtw zUiOX>c6mM@CEXf`QgYZ@&{jY5P&I)KUw0mHckMlAM_sLVH*Q3ELwNZufV*P#P4R9SV+* zeaiN$Shu!D$>H_!1EDvZ@$u+~Y*YD|7bjGIl&_@iwO$r$_Qwhc2uHkiPH6`AUYREC z>oamM*_Y01T9vD0v3wvbetIrq4jk^xPYI((7JZ6I!d8M#oAlxIiC^x2Ex7Hurj)oJ zCAoXRchBqZ7Ov49cs`hEHOBvYNo6p_pu;-v$-C8cd*XLZ+LL{4?#>mie=ErAeE^+X zT>Xse8em)_d*J%`-h9AG!RA)Sm$mCQ60v$~P2Y(CB z(zJ*e4g<>kr#|1^>*8_42Ba71Q5RAMAct{e&9Gs$SxNYrHGk^e+vY=#)Nqo~4H}a| zH3pZF9JxX-lE*pY^YA|2Ks2_6C;24MBj>+8hb2DG28BN*jsAL0-hpyj10=8YXZqwV zjE9?`R}L-YxM2$}sc_A83%N4wtdcU}zf!)r+cEfwAv?!Gd#C$$$8ai)&&6VEPaSZ{ zg6bW%)qwpaq4aC*bufel*H0WikPK=wdD@Efr{$U9xW;(ZQ*JVNnl;w`@%IPkOaSJ3aQA=Bu&Y|pS;3YsxFDE;GR!P;B;PU(frK1V1T zJIZ9^W@nBuvXL%!pn$BgvSF8Hn)%7B`;+#3w@|&~MqUxeq=3?5?cL-UQQgg3dZPLU zYQ4F4Z(MqRo}%vI6X#l(>=Xr065qXpNJ)|I(TCGLf?!5_)#j+HLe~L7Y<9CI?hUbP z*GCb`BA2I29QFNMz9>yLM%pA&lSWtPcGbxrrj@)&OAMM@@3XrvmDKzUmK7NY3*BH; z4`*g;_p7sbmFe>XjW|)?y|9+&oMOc6N{4h`T7$JhiJO^9yh|X!GB}s;bR;}0PD5R+ zEg|ol*`Qs~v~l$tBasPxDkJ^7EP{l)psE%z#1ZVxUj2A#*zgl%wS_UyI9jBpmsR)l z>{}YNor9tyBDTusD8;sRWi}}B<4H3C>wi-g6!gSS<+vWT5fZB){<40&n^asz+7MDIBGKh;>)(@es1a6M79prqz6o+ym zReT^v{V6ddZ&K~`;N!;B|^8Zf*zYe##6*f=YKl9 zIu21Z_Y4bdrw_G73iZdFW;>4{7d6T0Jp8(s0++5cdpiAgbrET`;VX|AlrvLi2By_Y z2LMm$jZ@`yFTSt3gK0~`Z8|ioz{Y93(k+o#ON9P?dWY;AULga=)5a8jyDtEi%E4`) zrztY$Y}-A1vI;Afk|7R&hBpwsCS9!Fg5Y0$dNdgndVZ4m9FK6{#L(9s@lJc-KL}90 z)Ntzz*ItGAlPWUqq(EfHihFdNg?4?u*Ca*$Ej9*`+N;HIqWw{H*#}Ru(1+G9L8=#82@w z_jAy&5@z|>-mVRl-f6O()uCDLTufT0M{OvJLDBuDV}UB0?d3WD=sqs($Rs{z>7pqiv!gI-deSFGS*t#Ow44PgM4<5y;{XGswA$gi1ioRn-3#Jur;KFY_ zTeCY@-kZ8gTCKSemWB@2%v5W%QeBQQBG3axwQtX6dOK?1$1A`@uVn%kJU1xFv~c^jPYXiOjBa zDH*S2xY0|jv2RjgapvuJgzxTs#nG%ZRg|)6Y)VGK=*d@1X0#2Aq@xR&Dz`b=p1mgK 
zHy$tI(6F(co2iw(*&J_npAR z116P~=*8c6v#lmjqavNe3XCIeHYKQooZN&Lqe93VU*g%R%{WnuqfOp1SE9cemi6c& z@15hnh^uYlnVbk?$#EMeQ^bgH0_;$`QjOu+XU2 za6Cf%!rU7+WwBxGxI(pNN_Z!G`C)6#6@k4EK)_4|L448aeORam?@pRwz< zLsA4q7~3n$xeo$R#kxO@UxyMf2i;d@`@ zrL_LszXUZ|h*pn$L*l~ohC*QDIyc_^iP<2knB0w*1Z+r2-CMd@YrbGIe?i;r^qr_{ zfR^&hl4hq{j`X_!1iXFfsazQ;Ne*?ci zJg>-LXCCzUb{FFSauUWXWl9|?froFV?@+(;`p=(dXfY^~&BBA|?#FR|09k<2P)GAm zB=+qZ$M@y_7+2aln5gEw{;uq|>qKDAHHE=czD3|a zvE*88{%3EoDrlW(v`(JWl9x9V4$WhAU{Fgp7!fxM1W%mnE}~K{xbND(FIOAGFNvB2D{jF1zeE+M$PyAxa+cXxMpcemHsd*AoYz5DF* z-S?|kch6?7Rddd&F-BDptSB$>5s?590s`Wrl%%LK`1Jw;0+J5?1NiE7jo%*p0_mVE zAq-JIMsx^%0|OM26M}%KibQ%bfQ5k2bCePlQgzWiO^4IR>b>vRb5oG?RdqXWmBEZ6 zC#TK{YW?)9XraJjn+M32AU~aR&l#-idCz z(h9~b!Omsk$?M5)z9HQ8*2<{Gcx@{o0S^hU!`bc`?>Zq)L1H{IpE9ApIuGPo)=Ri8 z&7B#&bw|UX`=0((AF*5SMS9cneG$Z=!&; zzeD0-`T{0N7!<(E5+W`5Gry*~I;0>p0q*Zalu(xm`>Zm5CyJR^?s|ND+?}H>gDF*1 zB}c2`7x}{rFr(b~io8ZMyKW8X!Z*6Qs`HC-Ak>GowSeQ7C++qvyT4TjVxmM~0){Pi z@91c_*PCm!qnlt7+Ilc1)nW5_TfX<(Vvlg^MHi^BkcwXKjcPQVC-Cj{XU)u6Q&4(Z z8YLgY8C|-7cbnwP+S=OA?k;6#fB$|A!M)CL2?@r(UHH`YoL)9 z!Bm(X_uL)J9^09PE3y&5PR4x<*ci!VBlSqOMk(W!%Vb27XV7h(GZ?FvO~7V8!Px#H zyxkv}A5CGCi_fwa+rmAsGap%RwJgoC&ijl1`6QGaorkA+B8^LL3RS>EL8sX{w=mNy z;N|7T^yXkW52NyhZ?1z+q|N=TF*0)U`;twsPCNZ&pM%!&hk?-Rqi4wBmI!1o!j780 zYie5QjsPRP?GWuH2mWEtFxSxGKsdLP1*WFACn3i>OS;gJaJRb?Igx-BvS+@>oo&bC zvfP9EW%0>Gy|DtjZIXNxVs1rMRS?Knf&Z~K2#dV?()I*AaVqRfx+;`SJ72nZI6-g1 zoK7kInS^deu}H6DdVvKS_P4TuTW34Fd{lv#q;bA#Q|g|j+hq(J?nnfSWe8Whk5675 z%oVwQ>wysG1*MrAjfz)T>R4E^K|k-$bR4JUx_fp%A|vNLpSk;>knl=+zdZ(mw9CuO z)8><}PfdooBMl96%#P>i=@y6(5fHN9UZ00AZ@PT9+b;W3RSV^br4S=b^M`?P0b3fb}#3WCEBP9}1;Jc+mV$fd79Dqhb2tC-`qr+Vz}rZ}kTCrNApPk=pHQ!h5q>VBVH(f$_D_PD2QzTU={&hs;3xnu2S zjAxq58j|mNr!X^DHw?Ww<4kbGvSsbQgOrd@U2*SVQh zcQ9M57AUe_b=$Ju){v5tVzrp-XEp4RW3^cWEY(?1l>r)Kbgr3J*1UedJ)a9kBmF7D zBQs=m9h%DCy&9se07SQ=mNx%PIOa~UOIX%u016nj*e&DP;H$4p1sMaRo;9@R6N~(h z=gNl9R6mG`icZ}eo==OQ@^#ETd_G-ni0pNbx;tB?NLK-lr6t^UbwPO@JUTwzn3Z>` zm<`2o6dQDl*BbWP*B20*ZuL+dR<8%wfIzBd-Vm6*x{J 
zEh`(<75-pReki1=xwysNP5E}&e}1_7vv6|RMj`ipJyU3Ma&nI)W$f_$TmgKvzgn%e z#6R6O%YO|D3WB#Q_I@&pR)t3^&Do+EUZ@27B`8!hVkL=4lI(ds@9^a+lC3ia-Im{N z4_h)REIj|Ee^BcWv_(P;?_^j}+ke_4(dGOi)Y&6CXcWRZ)?88E>6Uey@Z3 zE}7?E#_hPQ*z)jR8enKhPWq0N!ux3KJR2$9pPXYjS;hck8432F0I?3}-_KZ>nB8g* z{L=FBITI6#RRi4tAK40daA9k$wCl{}eSL*yij+jS`tDA7G%C$3HG_3_CCy|iwCV!N z9z3L_q@sDnZAZew9JQLAKWeYIGF!rCi(CvR(C6B;>j>_T^#%j8iikF_7g{Sb7ngQDw6A=lidEnzTERV&UM6K7W8$M86NC?(9J6GWCA)Wrg z?QnIi&6Du;?OA!L-YONI#AqaB2ojsC@8Wje;+_?04e2)tJau|hVtipXyE4J5)+L~ z{G=cLY2)I;R#UALgt+u*=|bNRZX* zsgE%GkRYBbvHs~gO5pXzVQ6#|6O!Bq99`Ajaz#We6f1OF^U0FvD&OE5He&YeglK-u zyCh@bU=gU0#MT84D18!akp{)XWk3tsw71%L&_JrA-vpX=u|{gbk<-%6>}(1rJ;}gB z;IMH&AvZ(|VE%l5d31T&P0Tb7h%f$TYS@`AF>=?q9noAQ5t(6_zRBNUxv2FONkM_J z{sgJW`IMX0eh0o7FX}yKSXq_YrMPn~L2_Vk_iM_{4P~|xrnq9ZOtm3lazoWjf!c0uU z{6sQptE#F3FE}^4St5f@kw{>~3Do`+*l-lv&D2;%{rtkHe5?T(?q{%1&*yd30ug#; zT4Z%*(*as_=3;(us0IC`nUY929txyhoUuA!{oBWr!4q}N^P9DPRoMRgF zM2e}Wib2zMnGjU#1_kfkt_w;drm(t zw9zIqefO2-*7Ms23^)Ur%0c;j`!y^5Nev(mGy>hZei=AC%(In7meAqC178Zkoq zd0wgW9Lfdd05p|pq^StSPSlMDj))o8nrB8M+RFUXmC*?B=w?qM^PTmVT6EJ+Vg0ZR z;=V!?WyO~Sl{4qj=rZ$*_M~)nB-&f_75(;vWQ^q^>!=!gPO3LRxC%Jz^Ih9Z6=hWf zGCCLyH7u=GMM5UTIjyETPT~`!VQr!&+pta-ezC>LbBZT=2K5%fzi|79zUT1Lv1Kqu zrLdCJ!5l8JdgAmMTh2d#wCQ+a;w|hy&^E+zXjc24=0A~p7=tco%q`N*HT&-{s6-Z> z>d${MIwDMhI_WjN%`wtH-$Ai^u9q6NofQ8ahK5#g#;rbs%=}MopaUEMl~Dh4r2U62 z*rZb5jQx#j%7)9jPv?Z9y09Ac5UGy3#!*)tKubBthaOsxG1ZIQUs#DB4>lx` zq|yqxBT1sa#3ww*zqc2oi_Mw>uWCh&8jI2hD^IBut&$0Enyyi_qbal$3CFt{+D_x> zcuuqwY^{Yr21(E0OCxw38Cj@K<;ZE+Fel>4l6;>}qB?nKOroI*QRD}iJhRh(4P9-7 z_3d|YC{6{b;TPNl_@*GIPsf4|@^n4woI)^R@VP$W>r{Os7(t5%(#Ox#SRvI0G=?xi zI4Rvt3_o&%e9LD;VM;?WWO8?VJIN_1DCM%jkGemWj%gGupalncJclnVF;V9>2{8S9 zFeLsBQ4{<<33xFtg#(+uI60PSc_w!%cfce1=Uh!V6&_b|hIkT6t)t7#K>gz*dPb_7;GY;WIa4Q!7DIvU z6dzfpYg5cnE1X@_6i2NOHf)^|_;VsW@E+>&{U0uq{>XPE@zuLrGTuC&x2veB6>!}t z8yXrO^aNu%-koSVjO34`KVydzawHTecqd->D%;y*7@L?RsW4K0`V`&v=9b|aYWn)* zRBJj(J~TGQ-j7zM+Ztjrk|bX!pU&K$#Od+yEuG6jYYr1*i-w&&VQXtl>m$enpkQKh z*vtUTPUju>lif2L7q 
zTEWc?E4T9rClZgJvdXlEh9;}Y2u0ev!S&jgtE;QS#{G%sZ?z%O!c z0R$Y@!ZxjJ;pVd?R%4#N`BWB*~|ch)@(1WFz3d5!V$E9Zsys8cluQ zVW3EK6R`<{T%0-U|wUFGh8 z@CSU8$-mS$5QUhWgJW6`vWM11Naz!*^NGe%t2@W4`}wNJ`#WuuO+?RRu7p^iudnYF z7wt>7?KTj+dAB6my9l8qOj~06a71KeWTv;awrY*~Bh1|IPMG$Ko?h;j4e2>K6$#lb zi|!VInWk@V{92NdS$oIFl*gT&v$f{4FRX32T7o{2cX!LvwJR%t>$Q}6+fCszk-+*E z1vV=%VBOP@)2=dXoT~wHaEbr;QFM3OfHXq$WpCtBIW!19{svdqMc!h*yspo1Ry=|z zTQ2PZw0C++^GLu}QW1{Fls2ABvFt<7)6X%{(z3#KM^c+D5_oazcJ)|3`Qal$)~d_) z$Ch!qrHxMLI$!9DX#)1r@p&}oK{fyaC?>|@bgnG)ZYNg36buYZ`S~+i!lo9W%tZo_ z=fF^*^PW#ILu$M4V_lo4yT#?@Wwn61uE)ChKr0f`ABN$H3A8e|TQU@1Ef<&Y95ig~ z^2SE4VJ|QzwC7b`Y8G z=!JwcQh2#diXPes9QUf{_cdHqQpuSUl^eWID|=Te)=Xs7lh4Q?G5o$i{;pLzyPxSj zrq=pJuR`sHzWTiV-Pe1ZxoQOom%&ixjh$Zmj!JKsdMr`zM_u{niq<;<|A2tO^WKJO z`HZlH=hMwid14;tJYa@v0vI6guBAfG}`ZllkM^jOtTBE&iod5Y(z{lq3RY8^Ja;-W+yJKB%$hRCo{bnDDKD?UfTjIb_b;&yC>@_4Z7W|L-Hm%eEkm3+c_q9&SeXoj^7Ht|9Q)4 zK5Gh0MNB>Dggf8g>uC3gEU8M$-1208Y-*R zCXrultw#OUM=Oo7C(yRuPdA4m4eR_V)~&Z{4xsP=x@(vD6afWoO0xD-Js|S#xIn1U+ z5x2{Ac#40XueD?r>v!f zsC}5W4Kbw^9ueGg*2YN2D=bngWJC6PL86|QPy}>n4@0X)ir4GY^hh%4HKUMw;=+}v zWHzAAoU&neT=D%48l-$ke!5)(`34rjUFI*&ZTjNneu9OB4Mb*zxk>giy>{Q)@9O0M zAxtXyed!~(!#oQx+kM3Q!jz-8G0X@YuBn2#XXNA4;Uyzh#aj2cW>fcA+rz^wO0zoy z3x@mk7u_obcQI{IvwzgDUz_LBj*m7=&@ay08ROT0AhnYuDIXVEGZ~PNWt==$tcyah z%*d&togQ%XNwnwwaX<6&t!>Kn*SK3#b92?Rmio%t{knH1tuISlObj^#2{*1H0fvl=!6GxYbiy^PoxY9bE}O*9VuQ~YX=&XIf7 z;~Ku{e$D+bn2H4V{w+81O#+PlqouL2Kb;*vpO0^t^e|mUP59lGz1YiV>jq}|@H3)eB zz;%OVv+=4xw(6)x!Do)69}>N~M~yoDx$c>&(+bv-Kp+Reb7VBhcb^;Q)1_*pH@-|G z@@7A@3s-dhhoe8PV8SiOkX6tX5CS*vFP&qz-Di2K9TG2$O6NB4De4LW>TNUdd_nA8 zWJON9^I~NnI#)F7K<@2fJ91GQe!Je26bq4IYPX-QiJZknHu1E4tx3GCYRl^9kSKR1FCjyJC`1PRSq~v z#?;Ra#ak4CT?Q7Rd%_n5FR4CSBg33WPQG3%i}kiTWEt>@7F+t`us;?{@ZOEO4zy#- zQ`bu$vD*LH!zbQZszi|{9tKLL{gr<+tso%t@VKvccsswoB`l!a0s&gXySADxr~?2M z6{F~sl0Mz70FbwLr@;d{=J=(#?SzGeL9awKXRdbJWp}@xp^xDsq34Y)1pVQL@4y(# zj;EF?G$$bFpy~L@F7^IXTV%yR6^Hc-nbl9^SbzWTg($>J)eeMVwS 
zbtLt0pGm3P%l?|3&5ml_jbF{be^uPWLzhyZ;8^Vpq=KxTa6)Toa09$fO3rW_h^#D)Z-LUPqaW|U7AcKJiJ&qWh8Wg$T>_U|hPnlMoG{6jt;RYr5 zPNITkDYwSNQ~Ku$QwpuG!h)Qaikn!b+mTuQSLjAXx(y=^p!yRWI`&ePEyDYKd-BOw z0b%fQ&j+I@sQ@8FBss&CIt$dyejV;Y+w#1;AHfpmoT6)u4!HGm*%*OPCr3x0TkARD zrj&|nt(Ft$!#P@Srf8YdeoK9}Jq?BSXQ!txTcL9_V>O+KI%|8@ZCit~yVdi!6uYxE zp9br=u|vo#t7@eWDy3h)a@RUcmP^2?I@S)xfWxu`-M4_djis(-&~5_y%>V`JbP^M) zbjQ;`bOV6dbq@?MaQ=iD(AwDO;V*TIl8j{eA*ClIblwX9u2POq{zUwd?&$l4gp`n! zL~Yf-JNzD{$(1yx0ZviS0E)^Cxc0{N$hNFko5I6r#!@-dMPElbu8jdMcPHcdXj$r* zb7*S2Weo+@b&VOzm7aMuh-dESYo0yvBL2P+%#P(@_}7&qrN6`icml`1VSRJN>kJ#FJxu~JBeAO5USO(eUrWp;OP0O;C>V*TocS~{~+=BJn z&tSCo5f_hxW&-X0VDe{>;KT=i?GUH{XS(0Hekc?-KSU7FuKmmDcAdq%WT%)=@0*6#%caB$$(vLu|L}UnA14k&!_7A^Zo+1`F?gM&Zk=7^n?yCTP=BTuy6A@ zohM9X&;^7osqOn@jW;*~T=CfqK&_6&Z>m?LE)^cgrf}F~XaK`<8!YBCylDaJ;Pe?k znnmavgpcmgbm4gA!nQGCM^0aVW?~2=-zrFfjCw%tcNk)|_xl^UqYG3KU}ZJEw{y-QTe@!Bjh5W?aFDxQm%`RvN5a=+_R>+OR;D!+C*ifAwch4g z3l?|>-T=0&&hxd}Hmzb{w_7$>Z$(>Xy}HCK7ikkeSEdadyUy&) zicSS}F?~+g>ya{ezl&sLhqpQ7Zn zdvMAqzBu=xEa{EQ<3TaFLc1YhC9*G^&=efZXSkg$3+pTwWuNbsZEC?WP&$KSEs1=$ zT&J0xd3S1jT>JXQaeKoY#Tr9BSG;lu&Lx5c=vCx-(fPde@~CH#%JEJyPBaduV$4Jz zzxtB_Rpil2YzRxPoZ}HtG5q$OSi4YJ_d687JG=Z|KfBdZl$f(L517h_UMJ;&!=GsI zK6_$cUE^G_3Qw6@nPf6!nO|c|l0xXOUyYz7%y8kmVMc-LS4p=0>aq6sH(IC+Uf1fo z9?VEHo3_V*o|@j-zZCo2tbpD51W%p=!C|p_J%3_b_c*VxaCqg!2STr{Pi1cb>E>`c z6n2Yhp2O{ida&YpjK}8M%WLUmQ7_5pE1Yxo?pt0#!N9ptZ+wzz%Kk_Oih9O&EjSvP zfwy!V*H}{GTCob86t$*0eXQDVkSK=E`hT;h*D{`Qv1QBAqzOJZf z%AoO~p@aa=q%Keyw)deTpeNbv* z1ej*kW3#gl11u6NioGs-ru1dL$N zGVH}GJXhCMJ#3tyE~l)|$_#3~JE0Dq zx-XEQKFWlv%O1l&;ANNzz@r^L0Lpf{x%+n=+#rE54pJm--UCKsRL^z;w1Ol(*BEwb z8H{HBae;81^B<#AejkZ4luDK!%XpcHElURK-@ZKHB=&6M#gNK)O|&>}I>s>sI|!%s zb$TFk70{3@e%3ATRwdiT6Io}-O;K?S+#}Otkb>1$$!{rs@#)jih>d?k{8+H;GqsJ4uZfy1kM{R!Jx_bZ{x^J^A0w*+)F)&eac@2wt$ zNyA2uXy{?EYgLh!4q1V>;K&6Do8a46@Dsal3u#UZnjH$Z`#!me84a_2XYa*s8;0$l zjSOE8ztX^{2T0hto`l2AO<4xl00rRLxVg3J|9tOZTdG}GT= zb|(<-YS_~Z|F@g$NP;^)SYot!NFMOKzrJoR0x~lyIso#!sBZ7qTekuAFoCQJY*)nXA0Ob5}wa@ 
z`SR)9gO3^wwPv%t(mdBZ{xHuf{Gp9g8_at>?QATxscBGd$_;I~VEM8?O`~lig8-?% zPS`rH)=?2a;=b)$$Ev+`vZ#OnLN?2)b<`j6yA46JRaVB3OXrRUOL+%#+BW%KbUQ7= zKe?;7c}jGw2)`pCkq-@409a1)l;qvmPeZL_!?hyG^aS)Agdcyoo6{}KAvQ)&O$q(i zC2)rU*qeO^pno%FBPR#TD1_qR%7pCSJur{m1}>&r{HccJfX0Nf9YwPJSwuvsX3z*t zIoS{tA$S$=uJv2V>1<_Xk;}m70*pB%zzT8>314}3R+QpW%O0F`sE(BHtVc1LH_9Z5 z;`H^cE=n@ECX8UhpZf-s?9mdBRAwZT#`nBvc#1O*ZmT7y5R`OYg{plO8a|dlnO3ezM1QSMl%Ii$ z-9=;}rDrmOX319)ePzDF(J2{n`K zR^;3ug10>YE`0gkC=R#3+Jzn83ZPgdlsZIVC@Z{KKE;8$gaDjUPfJ_zSgO3{H2$%x zJnrm7P1h5tX3gJqy3UG-aPvt|;oflH;kUyYPB_^d#Ly^ho`?1J93pBKdpJc=N<|{q zJS^D~ku@psn(KYdEe&yj{Qf{O5h)n+weC`}s=R4yC>sXQD+E+F7cUQDpkmC#DPNE? z==7Rtkb!P0*QPL6uK2*kgG*(AjKHPD=8ej2f5h9oQM|Lgt83}yXt+AnPPsbKOv6?} zC~Kk7b1-l;)r9l->^?c<9ijL|{c}i(AGnmIQ2qy*Saox=LBG^e(L>#kuyw$Kq-5+NIP}O@nOgHDN^UY^u)uwFCJFcL{XJE&gTzj zIRnE?jxPLcBx0Ud`f)IlCzpN?&3Ydi)`px84_lrJ0v8s5BFS7F1|!YX!j+LK?i*qO zH7r|nH`wX^^=D6_5iKi3Pe11&xm~z3R#q0I?X+AW6X$1V#rkX}&5SLA`(F-6PKeAg zuw_@dYjeOPa;eUZ$u&3zd&!mv7nh2knCMcNfw16lVfRBKW#TW(kWt$L`R?=V zpbJ~BHoISbb%QA4V3*gsTg5^04=0@QdXhYl4P*IFN9Cl`Q7m=IYH{eSC z@#+2H`^hCvfh!;+t{MVlv!bCJj?;Z)_GNdIqF|k%^!~8c{p0bB{ydc`3*GN(#EE`D zqhL#h<>GiIkIb{kvvRW?OR>1E4^8-aah1L{?aseFC%Wxm{&xf1FF?#`tj)cv?#Zd)+9A9P4XR9PbB=zt46W~ z@flnlp-7fai|rV8{N)!QSZnQ)nHcd%cJRcoxjLv9t7m>gYgFYLU5?6K4l0_VFbrEtpUupX6)5S!u|fc?-jvV96(_20 zEn`=UR)-yIrM1{l^-3l^VFUeQi(RDx`oQWSXzrm@D-UM(J3+>8P;JyeH~Eqm~O(Z5=(4ydji z%imLG4P>}|F>_4(Mlit}^`Q>FpqV>Rxml~tQDWZqXr495+MG}rn#)V@nyjhNofFoj zF*vE*>_8#%HZn&|X-$=JA@%G8M||}`W3;Q~aLpQhhIcCUN^{rp&nU1yU9enVD=ba2 z{}{Eh*F z1giY%hURsnL|}n;iff&>ais2&fSq4)rbDZ*JiDap3|&;XxW9fouuLBcG<;CKPQWda zSN(nXBg9?=0o2tAqEhm3^#sd3dk4c0RwlCaOZF|ZkPf7=|giqGMb_pCfsp?hA)=V7gqaCgLJNt^-n6zp?4#qT^=) zACC(6xRl!C&#qXzp%}KX+nwaRnHdsb;Wm>W4QH7c=zEgWnMpjKtjdr$F>TmSrNr_hU7p|0e41TowzZCiX$nSBk*Fv^7ibb@R*$A)Ns{7HqgiCYDRZ@H$2e{CN8BGk2&Avb6o znj$8ZT##{uiNHnf7{r`G3qdzseVq?Y$5kUJN|>2Io8YP^`=NPK({d7NkyAGS!9t#df^ksL0tfHE{ob7iP4+n_JddAH8%Vd)>}9gRmdm8bsr^1(TG)dQ+{y3UWf 
zW2`;<_c8kzI(xZx?*9(#$1@ry{$J+MAkooiX+{|6Hmn#;^<1!U%iG}E4b9u*DL>AT zRcU^PwRJpH?#OOjLwR>?o6XMVc}S+ctx-RPo|e&#!xvvxh16V@BnVrqogvd+Uyb-6 z?R(6QV`<^PBKd#P6)1+r+OI?LP&j#!%{4(FBVvIg#o@>li3dd(B#@jHDgIZ)>z}ca zGqVVnd61(}#ZGakvnW?tBp;!Rl#)0;n&FG`5q{5|t9vPCLs|}v>>saE7U~=v9nIeP z;lZKhX}B+)0|N*KHE~V`_Yjqw?SDCAUh1p!lOs*EAUPAcEpZRF?S@)0z7iL~;f6^W z!b#-=mSCO?o*IYCQ<*4LHvEBK?hdywN_}0v5~3Z+{dhb-c|M$=;w+!RrDjJ7b{?YN z`a(&7Vb6Pe7E_woLpb-A<|xggC-Q~+N*)BweRO9KjOysF!fyQe7yQ7OYU8`_DCf8b z$MT0nO()uX=fzLF?uf%0Wi zpkLW^^3G)mH&x04l`=F6wa<7e8$_3U;>)xZz}9l8>cQR*wDb7#>d6^w80#Q5lj`Cv zd)84YXZ-8lZMoJ$ftaV3t)IFa+&mH=bfeB`ft45|gOL1asdN2<`NPf6B-p0oA0H@9 zb|#~*HEW;rnbS9Fzti0q1uw=$rz;E+jTP=rz9k_qjpwZ6rpx1>1gPJM+}vH}f+?D^ zKHDw%Tf8^|NSR4fT3t3`<2wf~(9lN9jeIiUcH@qrK*a6x38B7e_?~JAuabwBh;QuR z+}UZ%V_^7^cOo_%TI=RV0^Gsw$S`AE`;I=oJ=fbnvLhHW>hR;)(4)SE(L`hyX~g-^ z;!g+B2a-=};*yGnG)`O7t>q}`o){&!LiHe#b!gPTT%OEP73}I_VJJJ~??@7^O1ojG za~Cd#cGN0}^@2<7Dnn{QX<^~qR7Q{1u<*B>l;BaX@;;!q&3qe5?DAuj7YTyD%bPe# z8m52q&4E(F++NzeO4&K=EJCfsL*?g&Q1*JRJbS6Ch z8XAJI4AwIR?%s$fB_MWe$x9Bcv3X%{iRC}VyCylO>ugiE82fD=76$obpxf@edEd5gVZS_r->6qk?IphF6nlY4)n{OLj&}0|G7;5TDc<<4|OjqJ=Xfa+s=` z8kfDhFn84ZO*q3sAg=&Y^bHbn=ls}8{o?MfplP#H^|<}qbh#@zE+gKE8Em*0`Q zaqQk1Mdq`RzmbCAZd`7oav22b)#BSnVvUA4Rd2Oka5=7!pg?87Luw0}NmK=`Q_=7o z%ZNn1fz>sq{$fD^BO4HjKy%cO7m!Y!DCTECF57tH=n#JxFW22lGqRq}T#ZESH=AWv zaBu6$r}Lv7@W`*`o7Pa87OapzEzlw@N1ejk>bbi#Y&ckjbfa}}Cg!xNLSwW`$6fE9 znnyE)OSypv<4?NoEHgBa`>kGFla808ZQJ#+;NFudeX74(OxN1>ryY((>3Lw}roqyW zsD{v>7fArld90Ykc8xLr4UO))2d~Z=MTFE)bVIzd=884*>aZ8aRJG~*y^TulPT5qz z_%>qBh)G)K_safpY+q8g5iaRMFRw)T{=Q@k?A-uxgoajna%)m%V^nvgA$3R`FWevc z^9#>-5Ew07tFG`z^DsG8{k0=F->#d zXUj_pz11*per0AhUj3-LJwc9GwkP8Xpv!x;wj(>hk71w(__QKNz+c?wy8+cF&(7Df zRd3Hk8o|D~Gpm6!g?^{6o5qowCY7B?F53YRa+|E-(r|w+BDJ%9NJm=PO=TAqU?aMn zzW#2=azMA=@7|fTl1-@LaVUa%fHl9pTT|}bvBpiT@lNGx{gig(wmV2i`*@7xk(PtP zhA=RC>AFZ{?;@c_K8K|7sv$8jxJ5v7ZitSHqgb9QFl#-P&~?v{MUNzqZ8@VDVlRXM z`eg@Xai{yFqDv7bXP}-)PYp1HN7aA&tUL4WsZRddD4@h&yuEuyZm3ty6CII09-T@DIpBBaR$^N?f$OIa^{1J+8^_; 
zwKyd;OvX-(1e2JJv#l;+XN=I}u`i2P&EJFWFCC-rDj6($9_`YaQ)A(mV(jxAEQ-7Y3;);hfLCDcaID(Ov>hVu#{7K2kb`%v6!0K4#dlqR7+{OV1EbaJKdpf z@-4);J=YkI6?BFDu)q75*A=H>e~CgyPn%9puMMbLc<6v=eo!hC1&!&psU~5`z)n6F zlko+w!m68}*hg$7c*+(=5Jy`m#Sv3(ya(F;iLZZuqVy+uiWl}=E$$e_ovzpT(Mi-c3+(egU*;p+}3sAmjv^J-t^D$**ug~ffB5go^*u;!w*nCzAa`*Z&Q-08)A zG~%>j@i(TFtDJi!12Wh9Xf~S7dUd&3|2e>r0s2sBZ#rvR zdQZ}!5*Gjz*7~BOJ)hpUGL;!KJ#~1#s6xFnqIHD|&-zM(%Tk%b!aUsACS?zaI+h5VBn z$2N(*pz4n2t(t1;4gBQBxb#$sj5F(BAYj$qaq z6txu#i;epXImyKFt*1v&Y(rw|O-!!gW~%?4eh% z{qNN$V5E->Ls^J)Ah8NJajNJetgC8;3E6DXo}7eyT48Bg_0QXm*S&om6ZyW+c48S) z#Hlk`+~{v{vBL?xTsr07?To=O+EBmiH)C_r8^10QZ;{RX2UUuNL^VrJ;X;iCV}tvM zvRc<+_GW=D-7M^>rcSYNr{^v_;>oKf@$e)qe>IC5KtDwvZs@JwN2+zl9 zoDlw0X}ROsthZHyskLsP+3K88L4Bc|X|3I>)|0{J_|E38R0c`?d-V8q@L{dvs8Gq? zu@JO7>ladmpIYswoTHkx{wPE|3T=SKxK<*VOG_EZog{lP`CL#;tS z!qDh&VJBaz@`vWB)Su}afc#LG^|4bdOhO4Irz7P`;&og3;ei70yR-tFaN%O(Tw89; zRN2t^GfV~j7vlMes0_IQ5zoqUX)rv1^>x>xB9D6UY%Tg(_yG%RL#!D!JwF}JXkc|V zoM5)Nvr)RiYg37hhO8s=>Oc-WEio`$p*1z)QWJM9oG`f2?-8B)?kn_$EX>;Sb zmlNYi3cY2V2)iqd-`#?kFIm9~IEFdi;J>INHdELSnAk7MQ`gk=m@G+;X+#r~{tr_f z->}}22;SU;eJW_@=Uh6+*)Nn;vjf`kHbp-{lI#Q|#?eBqsBB+^)MevP{GZqqnS$nj zosrfY*tA$}$^BS|1*oRjNZias?192MgwSe(YcGiBt^0JDk@%kpe$bZhtS7rZ4*zY& zXq*Hx5uk;wPV|!li1_g;CUU;BuD7}8!Qyv4%nmri<{E-j;F(tJhSogk6;Gk-|4NPY zFWvSwq`t$fxY+UN!TRgS_@=wwRb4#Qx4EKk7l{vk<}>+APK3!AU-4E{a*eA^^25ML zJ!II(hPN`oKwhhqXJHn3z~S`y3B#-EQkRsA-r2m=(se-Pj&G zt9^=*Z7AdM)lRr@Y#qgM&F3zitBpt`lRNp?{s%hzm6*5afMxOuqDnVPPml}0SQMXz zD5f|nF&^7?>{r<5q2a;m#(OQL#{bV!SM?p>mVw;s{c#l!58gV9bzg9cGPo)jN6ElC zM7D23MU7xgo**CiL%H>fPO(TK$=*T;?EQ>sa;YZGzij{weM)`k{CsF5lG4NmpZ9jZ z|KP*^nX044`cCj)nnapmklVHITf;lG(&`>!?i#4@DtejY*gj_0I>H!{h&>vek{ln; z37Z|l|CQ_8^7PAiwfpcJWJ0F>=oe;!{U>t7kQRmhHpYOvaj}U%eE3izPv^g7F82Ss zK;i!;4UGu5SAy}aO+P9riA)cZ24SZ=^IMg;n%Wo2&!3YTO86;XaiUUFWx#pFJQkq* zuuYA)v~;xx^8${o&80sM@e3sa1^6FTTooFGk)7Fh{H(sbMUc5jyLhJWO#C7J#K)=Y z>|F0UTxFNc>uPU9MOC=AcbcQ%M7CjYm@WSC<@-a)M}84zaA}J9ay`9&phS`pqw#b> zIJhZ~)(za(F)(_%)sD*y98RPykE^8XE^qWLzlkjw%! 
zz25UaxY!hNyx0)pc09;}&yr7X+9SgCvw^l*u8;Td@c68;y=>E-A4uZn^8vLjqu%l{ z5nLbSL-Y=ArKda5aNA2RJn(*h_L(c!5iu~>oXHXrxz70Fe6k>~@c(pj-r;P1|Nl2j z?H#REYPNRmS-XQORf5=B)CfXpY~E&UtyxtRZHZAU_8z51?b>^*6`S~;e*XIX{rfLh z?&M0Y`#$%%U*~n6kB7p(OeWY!21)`R5SSH}k!k<9WZih_F4dQHT{%4mSXGeL1T;Wb zFOHLw)8p!h@XUPW1;9l5d}x5x0pS7Cl#y+Imup!%6<)jMXIB_Q8yO@v!a#zPb97}` ztm(S(y`0zI=2xOAam(spAm@P$_MRKrH-}-(1AJc#3yaBeyEs^0}Q2M`Kf8jmuZ{yG!bdIZSznyUHQLmqApl zG7mp}GOzHvkP_g|)tnKQz9%UeGwX}^E(ouAh4Ia`Ki)2ZK5zU{qF4{S1YBPM(2EF# znz2(5?YEF{98|`LuhljKw1;mkUG9i7U1H-MV!LDd?Uay&?`faDtaipRlj7-o2v_3K z)_5dOJ?@+nd%gwlZVQ)`nl1O5P3s(xwfJwU*v$0c_&6I7L5X{VYBt=n-5~k-Mf_ol zviF{HP1XE7o=F3ett;RWJ+Jm>z@t6{2r$Yw9=0|iW>8c;{9fS2*QY5im%g=nSLc*9 zi{FJdXzmE+7Z$SYOxI)*5PSz*&t?1C)I*U5ESHgjmEiK6!*aJNH{}@}-Jb(t7bGAkC^TE^>G=I+p%m)ZMUDpqVwWz9$cM(SuRn0QiK88- zBgP9QkS71TUW2gPm~_18Q8G9GG*V{6C$Qd|d?oA#;O{G*TiHw%LZ}M*ti;vqN8Bqxrh&M z{`~l`T@6GO$jxtj6Y}rU6+gc9jh+E1R!ii`j2G}EM7sNbwChVB)vp0s0&!X+S65Nc zlkUKaW)_1LAAyAYXJ8%4mEWy@aqs4GrqY!n(}Jhq<-@mbOL?rV4^ow{S@y%8-!en! z=pwhvy3Ul&egj_O3qj3Jm7e~DR&4#tg9X@E1rP-8|Df*oX6VaJqeKhWS4UNsomu|p z3HP_IV3F5+VR%hvfHaHmA!5{c*DsJ7v=wRnHK z`lPAHp-#7r*jJ$!ASIb*vebE0d38aZY9rA&Zk(QxfjS1CPq)#&ygcMWYsOeQcSNn2 z^d|RJfP78b3vC7_CVN3v=Y!>xS3aH58rpNuFS?KCWGb20AfIA2(vhh!}*}cYCWFN1~E;Z*r~FE3r{BD zOTE~ZocBKf1vvK)vKqV7T3IQR{(;%Ee(-yX|)G{_t1oC!q%T!Px7wa!3jdqwc^{XvhM0jPc$XE8H?-_ zDZh0hKX{lG8olIWnv^~}dv{Rni4IDqkQ9I8Fo-sl{%$t4+_}-**Dfab+p!_hvWylY zQFS^CL(~rUNzZln#i8UqTv?^1jSiW9gIMPuMR%L0UT*Zn$va^zVNsIK)0i@7fH$*T z=^b*2KcAsKiDbnyw25S%kRv7Z9H0>|fNLU8F>Y?U*o7c4J^i1jk#_enMn%p0O9dNiS&(56kz9uMrB}|cMv#bP{3w)*49FsvcHsJQ z7JMu=tx~a5(ohAFd}EL$^cwFv7O%#_zIuthWvS4#x)_Rtb`2NUO!PfVLGW^Omy>uV z$CV|`YzF`nkU#(VPHF!hbaM=?1^e>|1}3Ws)FywiGR%pZn%ad!I)2Mxu92{-I-#om z?qVrHSjKp<8OvM0%+u_5k5w=WVOt%jcpY&n!mUsrBXM$Xnw zU7!3PV0Elcc+xb6AI2INJkz17jM`BjI@@cb#>Dv23t2QK&WvljWUaVof7Jw#{qvEh z(ZP|3^3bGM<|^-L7uJED#kDWZ8`~g*4*S!^5ujQ{LU~*DY4W8ejHOo!wo*Ndd!39nDrF}pn?z|9I zCi%f9$mYjrieFpn_hFPs`s7?z04@!Sn)bh78(P(rXKo@E28sWzKOH-q1jIB55bo*- 
zRd!os$a;`yrfTr;nLh2p3DB0jpBZ0){e0#ADoOQlRVe;7!98M3T=UdahS^NhgfpCH z-1e49wTlYKA#;rrKbHlvr8td-K&3NVL$QXw6kL#!)=IE|t0?klv(LFFlj|dxZ_^Bm zVK&cd@PbY3;1t{dr;HR5NuZ+@nKw-o~r9LD1(jFCOwy)J;11 zy)C2R`jP}5OfnAF8?!g$6z-&UpaPPdvBJ9PSix#(qwXr6^;Fvv(|efeIe(+YLHRgb z@VL28Xa^3bp@_N0)ulBASLe+XIuK(w3d%uM)V4VKkQ>YzE64lg61Lrzw`N<;j@WnG z+uONR2%we~%1x(>FKD-c7Ki&oaZ?jVL91HL;O`soui!h|`1P-jZei+tO7qOxW(PW& z=re;!`E)(YSi>I*(xB2VyL6&u)s8DZuOZYbnB;53J^7 z5smK^{QYr)`5Q2PMPa(IvJJdAd^Y6R>FH7$s5nI3?sJ-B*1&noFkt2H-fcMH)X6x` zpH}vxfEaq8?q#SK(eweJ@Xw<8UA4EaNAM=U3Q~g1mmqjwf#=U7kyqkJfPDU2AKa%F%9 zOz^MV^mJ!T#v|5yQC;jl21T@C9-d%Afythp3zWFo`cPYzMyT(A~-bG$9m{vcg?m*ty1 zUOf|5Rsl$<_RWiL6i0>Og;@jPO!rRfx9uL!)yXF5tKER|c5CJqnhtOYC(sQ+mE7ow_U@AZS_?a!;=x=T<` zM-lJNTuh?=-tSy)VS#I}R0Y&~i+{zN1zIhfUzY2|@dJNhkw?m{fO$Hb-TiQhXU+(F zGp&4;x;F&WXadRR5O^H>jL5l8Y(yJ5)XN8y(>+`hoJkU*0{A<(a(x`Q`MS6@p=u-Z zfTGR=1wzk+81kFrsA_?3Dk=LImEtH3ML+Qgm33z$|9jGe1WAX*_JVa#Xe6eo;XNqo zjP-lYD~#Phpxdnimd7i|$3yp5K>&9U;}+K~C%Mjk_wD!CoQ^%bp4$W z-d{L|p6X8Bpx0I4-6&Ne#YoRDgm?AqQcz%1V!z*{?{qWuQV{0J=<1QIGy&&w+5{G&FiBY8S5z)Oh2RVVVQR4nzHou z#U}{9xbIvN10w_55bC^Ne)o~Q9e^e@8d}!BK5PMbgu8LkHmED*wDbbP6N;SBdAv97 z7-T9@U^+`f78q)lDJ3G1KtG_KlO9&ScKTraqSO|+f94%oy1Oz3773Vd1Ap2P4;v2v zp7kyF^F)7j*bfA)p$dFAV!+Y-a~4gYG>UG;A-W zkc)U8RQYBU#`{Fj_&~m{fCRI6rp9aXnQQ9Dt2zu;CtYIY5@KTfHE>?z7vR8RLpW1^ z{c>uA>ui7`-g?{z?M(sv3d>T5w#Rsyz5eT>?c0J_{oIi4g>a5!AMyd9sT+3M7@l}j zF!WTUL9prO*ukvNZo$Zi_PbAFVr~*hto_>>i?t{GDspVJ);0r?kiEl!rXDFIE=BMx zMCp9}ZjF>d_h+#9^u=+sqk^DDXp@e}duvs@4p8}IsJ7&uJFLtP9_m6g%% zC|3~FrX~0ch7hssRaw-}B_wS?-#02=4zBI{xJ7_i{gt440ni#0BPd%T!ft7W1Ye=9 z=!68?#RH|oU&GVpna@WM^-h1p)0}UJ4SYPy#+*FTDxj6~E~Rvg>d#d4dF!??U~Pnx zc6)&$oNb9^oBHD0%oytJF|TU#ni=*6TS|yE$dwIzP3FB_cGF{X8FJ>@bt{`8U-IH;tb&%0X$9}A6!VjAig-KbrqqqO0uG#?n zY}UZUMFDSN$0i_v$SqR2y{H5gf@kNY>o6!vrn(1>GodUkZYSa$qBphwPQi$<&C|G z`I-%xbDEgDpJD6nHwaA1UsKj8xD3^V`9$p-V~ZDObLc5cB(0I0FisC{t2caMg{85a zG|@@c$`yLQv-9Ir+CHeRG4==b^VWY4H(DLMiw=<;%#eoXgpvm&bBb5UXZao9ii(X* zh`n=HM%bLby>h|C)wgKrR1Wi2+o@%n2c|sRrLjhh91`0n=xSybdo6B7N^Z02opPH^ 
z7ZA#MfQi=SwOd!jkxe~!p1)6oMC^xBOxtQ7wf#qo#V(?`NEt2WT)n*Vz~ULz%o)w$ ze;P!^p>$96melr*Te#*5ZS5BIxHH9*>0Yh3Il5r}>=1p_2^VX-$IENT`&Zh@{x+`F zr-8YxVXL^V?;sn>X2V_YPXt?vQ=9| zr~9Pi;@ty{&6s0vQc)#x@_9UIIG)_9c-8wFEUOD_gNcb^vn937ItH0LOT=_dIMfW^ z$D#=|oqa~@sbQnu7dzDz0JL``BsqT21%oLR3=&7VKAtIRsH3(g~78|AF%zURciw+A(?Y?;M08{j> zxVRKAH^yVDt&8R!0wX+cM-rP+=7RR8H@)Ra48aOm^j7)E+G_B~2t!l^X)X|rb$=ny z*7%RSZy?;!OxmC7%(^2dhim;OUluLo_5ewxD9oOS?bmv1YN=aQf=duJy8_qTVJn|E za0kh{^k}Hrx@K;T778I!W0vYrkn1NLzI#e&m_vHvH^lYT$xmjVwSF_qG{Y0)g&q}u zzG;{DJb1l9j-g(G?}lFF*9+`NCF(T{8qri---!htn16jE{s*7;j`cJ((=i`3pHJ*t z?xo%FB?QW_N1HQs1=TFoJ#iTD{@sNqd=iyDEuJoBAx zIA|79nS#*s14d3aJ3AXlE%{QS;Y)J!i#apSMB}gfjxuEr@1`+^LLP&`au@DvH3aY- z)0ND1#`}hkp|x1*{_^*@bH_>p^QZ%jn;3$JQ?xr~P>WwYou=}sQly+J6nid~Xwk~U zk#8^P!lnKHc^X^0@I*anIHuc-_(9W-duLY6rgQLTk{vgd!zmYWek|bkxtOls0V_}R z$F>Q4VpmGgXtZh>TUny8MO_MYrI{b9O+j9Wx(S_P1&aw8DU|A>M66p8&2q2RTY|Z$ z#!<5UrDXZ=m@||S9(mBWlv5czWXlm=FBN%A+Ou=V%%}sfYG7=hD5G`@(zU6UP6Wb? z1fhW!6;^GpH$Jy+A$n7;IZL|0sI`zQD~%m!vy`IES8hF6BqoR;$~ibZd}YPdg9sFB zr(=VHJ`Qo+(#%GFM%Zsk>FMUK3EmQ=&19<9)@dh9GkExLDISds)0KprB%}x$3&L^p z3K*(iC!KOIZ>WXugWJ3EOFY!?Z`NCdj^BxaBLTHZp@cge+7F8&4Gtxt)U%o7JvMDP zhwrO1RAnQxOYVTDV?xc?=_CpA)&2)E-vII*>w5fhAjR7l?Qry%KU?Gb4VEu$)b}HR ze`%;W=xSruhlA#R?3xvk+H>zLX1JnGn)rvQ&zZLUVql*7XyLm>rrH4!*)!qfJzshi z7Q=Ea#*yv4H#QJ(&6a$)X_=@zD{4~ZR3p1;J=ov0-t7-N#bLHS9||fe?j)AjrydL8 zM9k*v#`N^`By2i;4t5HGLTdP<^|{L}bLaM{$4}}R0aHPY2VvMfUMxhN?%n2nrnF^45pgjeaw!oIcw>w^J(ezkpJSsnu=d_R}H5Gpl|Ay6P60hO>KEt-5ya0W5^-~_} zkXKzrg7x|l2>jtWwbt-O;d=K--BzxwGM%WV&A@@B3BpFFv-7PN-#|h7BL?9#u}8fi z0wp3o>!qPrM}L(8IXlwm-PXk%dJ_CL_0s?HskmI~n2Y$3ZyiU!3n^ks|6At=Cly2F z7od2DY7QAp+$kvFl#BjCNXybPq6H&wOo;v#xBmE()DGp_4WA!%PH(nAE#vL>0rWF? 
z{#1nm_k%l+XU3%&H{Q74N)O}o4Y-ivu8dM+WDFdYO=7Ai_Vrm2**xgf#UERMyA(AU zXMep|cn9#@@w4W>?j#obfm@OAcV=EQcyfN{?@1}xAT;GG@$pp0Dq8em4uj^M(mA5C zgv0#%cDm8@vUt-7EH{v%2sT|F+^=f=B>t0d?z1^&kFPb$x@@^1i>XIz)#}W1@3*{m zi7)nd7b(5sp6TA2{+)@(N#^tUAsa~#yW%*KO3danZR+R9QPhP+{A{D-e#9-~tovN} zjb_e7`ga>p%gvVU?G{pQgRuYHF5SmAPKXs%#{X)};LI*YB z1vT-09Y9wtcR(!SAuK?D;F4zH90O2i)Bj@0C`IJ>pJ@GW$7@71QFZ?s@Bi6h`~Q2H zN=Fu0BlOs5*KGoN$){IzEQL#E1T;DLi8+ zkni;u$Pg4&TF8FRe~;R%c}(5u3-pz>))k8a^d3W04`F(nMf88TXzMD8{1j4V-pSNk zBx3#yuzx#vgPAx!3)&)lv^lq>+5aSn825F8FdofP2KR*Q%F7}_0wp7d%X&c^Gn0C> zr%gK5u^vCeaW=WW7s|6t)o~C)6s;VTOMdGNTVHFKdCKA2X+cDB=b2giX)m4(4X$4> zSlHgIo%Ekxv6Lf%cCv-eI_!~WFZqIM_j%PPmx|WGIUa%=GGB3ailyp-u1DU2AG{f( zdbJ?j$+e-QLS^h8L4-vt0*1}VK_T>tT(D|3#cbA|c_(p;|gb>`_-Q9z`26uuJ+}+*vcFwu~_`jDculGJM zdiS8acXd^*T64}dYlo?PmO(}!KmY>+Lza`3R0EEuU|`@(a4^6pM6}3S-~jHdCL<13 zH9>R)oIsh2DT#rB)y5#c8bO1BS-HqbifQ~bIMavr#2#AsEeA)1w#wAf-mbnHf!n-o zKzDv_K!h;xdmy46*VWaPcIhI!-9CPNCr(_B?BZkmskS3)DQjshjc>v1$Br?cv=~Ki zab6yJFL>|${)T~>nuG);Ro~zH^1Qq)sHwoeQw|gsy;c^y=$uAeoSAUxH3S5op_H_c zk!k&DDuRPCw=yRCuJ48i1_my^_M|s$z*d-QWK(KQs=9#PKl%MN8aT+b{+;~xo+6sm z`0foq_msnMQtJK6@T!~U6X5mxLv!Cb#^?M_0zptn2(rOiWjh0NeqkX+PhbDd8@s~t z#j9IKdT$9b>YEU3*P(^X)4hwMxq5w_U(zohy6O z)2-5uMsHHm95CDSLr)K_$uG8r$D&!}_{VO&!N{Kqkx_UnpSnv%`58=v z7QE|8KIS^*b&Z6IqXnKD?dAnfI^`jwqqT2Nea>P8M%BBbqnAqlR==6$*ysGXNA`#A zvft<|dbrpayMg9vKrPQh@Au`p+#e?l!RcQR$JCUdgab!L6zvnAh2i?wd;WOg4P=b3 zkGX+B&q=d_e?zunDu)MdLsHSf>9kRS$^Fk&@Zk+Cpw7gNT;Km(22mJf`Fi~KlkUWV z*uQ6{g3t7FqJ)Db{^!3EX@gUBAOW6#&$)_1!*Z_wH$od=@?)JI59j5ICcscd*@vQ% z3Wtikfb!r8A<$_Q@tj;;Q$W;d;HMrxf2tf#<(cO3ai6xgwk}Of!Hv{dY1@22r`M1p zksa@kXIKn`?={$O7+iTwO)u7TdB5@VyWi3YdE6&@dLb{@o1?C^xkQWwHT~U0m2zm@ zYGio}Q5bHxH+ zW{+mgEmm7f>*16UojjI$JH%A6QEy=JOwpK1GNX2xZZkcM2r zh~Q=b?1Of>K?$vLc4#Xhb0+RU6k$%C*|>BVCM`}_j94PSi`D!-Us31f7Ntc3Z)jfN zWD2wHTp%(y7`fOl8YKeNRwvWL=eu&nOo!8)^ZeJRo4Vi*xpK`)EKD4zm$!!{EG{d_ za2zIf{on3ET(N}K+i^DhGP$yGz0O|DoQ9%4^Zj9#J^s|3rlT6+xGbF(qs2-Yvwo1U zW!4LJx7!1VY#C!I)x*U%pRHD!gkAr6AybSWRftPd3S&A5ytz{qU9 
zyu4CnUo3Y9kt^Uq5y769o8V`QwHeHMEo8+GfpF-%tE@(Xfi0{-af2{tkvsc12BVNm z_2wntk7mXkAM+iz`wRZ0)vH|Z(o-QjW9mH26d`U;1|#gzJe7RTDg+`3nze|_A4h(y zH|lA%Ra~?PuwBx>RsNDMS{g} z@ze^Rf4kr1O=PvJOlJS4|5~-W{XiZcxpOH4o;m_{3@a!oXt`8JttX5hg~t{-LJ~(J z8BInnc zCC;VD?M-x=3&KD8R$RK+uXl*Oy*v~d^##v7UhPN*hzsNhy2VS!kVplAWjQ{=Pv*8=rrP^)u};0~^>EH)eJ{xAywCVc>q-V*k$iZrNWt~q z2RAII9K8d~GEYCxbb9h%6C`^$!>0VmSpRJLyz^1a@^1Bc$O-$albLOF{sv|ZsHkA8 zI$OviSZT|S)4|Y3cs?i0OCJVoxXrM~)BX9jOw!Z!VqK0>2Bie}TYA+)!U56Ts!uoV zZ(f;hcP5hI28IkagL9sCX>+#mh@*+7_JyF*Xq1Lph}hfP zb7#HOb@8Qpzu5jlL(0nJw$&lvvBN+n=GTMEA`^osLtV!0*HF__A|*#oz!xiayLmTE z;HS4f0LN%>f4t0ltf@5_inSy`qM9p0_Z0Op4EThV%2NJsh{cHof2!ap?oMO{*sZnM zQYN?m>GLln!;qw#GQ~FD@2zEu3mWB`3HeeHgL`Vjdd&Mq*dvZ2z(C~LEY->V4y79kMeqC3Owa=^ zrPd;HIIH}hv4$EDq%T&*f={4TmLul>p2(*y>FPM~+r!mzxq%)C!;1#bCw6Ab^nP3x zgF>kY9C=B%>s`6;cgJ6@R57XL>AnS7M)0FS1Isr`k#bISFe<6o$-Hgu-|~f-bgq+9 z$DQJewVZku;z&yDd3i=6_KX#?tGhQ)T~Tz@P5-_KA{1pGR+^1 z#8(eRQb!P`^_2iJKwM)NoIEjaP0VtWt<+F7aRE3a%#F7|o6kq*V-uWlns{&0=S<$u z?LW@Y#$WqT2P7eP_@~b05@-s-u|I`{RW$e&9H>tQgAyByiL>lCs*d&aAYXh7>OK^@ z>$jKC#zWdM@Lpv~23;Xxxc3D;ZV#oh_}>`3sc_U5tNpL8uCQktG%J_qN>xp)rt_1& zb2hMrMt@OET`tqE=a-7HiytT2vC3^9c9OK%({KMVy*&`2CEI;KJkog$cc0cg*1_m$ z`q&8fISQX6oa@TL%>(WM;@Sd=IPo$d>DK-m^!Jkxu-w@x?dbxU6wc6NxZxp;KHv?f z?YoqW*rTv}{bhkj8#{rloLHuAh}c!^uHsV*xy&xbH-OC1Psv=FmRr zFzih4t=H{gTv6Y+EL*W-Yz*>E$hy z8QivcT_5Y>w#y9(DNy;+Q4_+~o+BF_ZW);Y&B2>13>;r54}J0dCL8q{P_4xbO@&>A zx^*VJH*`=eyH&GfN%@?#ILX%oq%KPo)9S>r7ze)!GkPkaVR$q6;)1D=4nLo!ePobI zg=2jqbz#d`p58kn1QVvmG0sIn>#+L!>-1$?U;8He(ip&2K62j(-Z}o*&sGHE_{iY1 z7nXQFA;4XcGZF~}xO2Z+k#alGkykz+IOoFuYY7A)WFxkjwp_8|pOPx^ZH8@b3G&McOOy*toOd*_?W&CmW)KC;n;gY z|3~@t7dWCJ>5v%2C(Cz4Ec_ayjn&iHceiP z1Wrn`qXC`B=Suk>5eW55!S-ihut6=MW~54c+lR{;g(XUAR{jKZBhSf|@>x0O=3y$= znV2kEVx=Xm6l1kGd!#2DzNDlkNedNTYi`6 ziO*kZiq9Uvt?`p^=V}aa;f_zr+&haup9twZ_*GYaD2(y)gY*fz@Ve(4gx=5Gj~{)! 
zvM->~=BIC9pgZ}1cjbH+_{WI0B>zhxJ)7iC6@>$XgE=T!KX_DnmlHG+fLD0jq!Tq) zURTD@jGL8L!@h&$bOX8xCcN}`$2r{OzROplmw|oztF2YETu)ndUf$J49$a}c8-;Bb zy{Dws$tpf~VP(eXP_g0K!qgNTktPI0c;ZyeBeY|sW#dSzVp9xUIb4{Rrw(Ss@45@Xi+uWI)C|6EL@ftITUr>kgS;#Sf`j;aozr~;k~ zpS^_|XXI8u;gB2nB~U{LJLqa3{5>~5I9U2}2cT5qTII!G0$o~W;QI%P6^$)4kpYed zNhfbQ&cJL82|>+>P}+5n54^4|*u)a1<`!k|HkBy)Jb~T}@Rr_m6ZE}0SLjAYv^!K^ zx>d=6n>Pctv$nI7eEd2q3-Yz-O8?!O8~W?fTq-xP(Y!f0c(A#rqy?AG2>jo4y9oG@ z)RM8%2AKZi)c{Q`(5&JAlKFoGvo9s&vfSg=-vS6>0 z&RIsU-*(%y+UQBPZ&F)F;yUMx zY_8@RJzif;VKM9R>PEw73fQmfxqrMvVl(TdcKpU%jD8g$5o}Q|QO+su4@($s>2STs zj!e?3HCBJVJ=!Byli~e&36|5GgMiH-4q%PM4geU=0tlT>CYng^Lc%Ts*0ZCA=!Hkn zJ@Ci(*9Xv*=lOYF%UQ42l}<1w?PpmJBSb>(iiYY9f!ZFaMGo`N3KBY`!=7-iqnUj5 z-}8G{Tb)Zy$IMnO;g1$;F`M-Anf3W+H+v*nO+Io7xUA$$HsAC}NkJ2&q8U8)#lgsg z0P&0p(rY=W(5@eOBjK&dS?_QgiwY9?yh~0Tmf~O8;`1S_-{N49jUP%XB1|nC8!NC) zes{i>G2#%l+;}1ij8e+Z2R5m2M~P0O_qMfuE|3e=fn@wsB`D6L-%BQnAW5xImfqI+ zdKW$<(EZ_D2(?hWDKV@{UtqRB46_W#jGO=_&}E*U%9998wX@p?sodP%<%ObCs>YBA zSw7!d0>tjy5iVSj!`2^ga0-8WJTkJ~44&%N!U&{y#WZ$+EF=`_!yp|hFI4I8f$uU; zCJDMBj3&^MRvP>ce|ZsJ*y=jh(=SjfV$dl0OsrcK1~5~Hk00^iQNo{-7?N%)vGGF4 zsGkSI`JDFwVx?c^ewWT^F@**oVazX8dT~H{l_`*kc`1xNTdF6#JK>4V@j>9STh+Qw z=dmBOWly?4U6KJvkMXs^&hr5bn(tu=z~SUpzyLXC%ltV`I*Ndt)b;VQMBWFK(RLFp29lKc9uUq zhPZ%kWYGPkF=F~jt+3_{$)`fMc|tapEL$~KR9H7UEKdSf;u|4=ic-FJdb&kQhIK?m z;?Lp(7>=!@LoljTl@kAGrieS9R@qoG^3ars|GVr2lWtRD9J?tyrBb?v)7q!6RkEGf zJ3}!$JM?>*Ncb}ZU2oiS*Uk1D6L&I6*3C9cde@yW7aLt1b>Bw51)`i;0Xoivp%}$e zi;ST1&Ay0&2PDvRc7Uha6e?xNpOe~#m`C7qq!S(VnCy;cjM0HQBUd@DF=-<00UQ^- zLfhfYr#Pl!;F$PD--4I79!+o^U697xN)$4`_kR{gn3T;~~#?+XLv2i>u zt3o!TUf$Lmy;di@1lkW8OIKIif^Q?nPPRA@(&wvuk#Oj}a%<=3RfDJ^SZsh2FzOX6 z|4d;$|7TU?D9UDu-8pA4^ z)QTkF`ZanNfvU ze3Ttufz5zbA_VTQ$JF%|0ln1}aIVe@`2 zL$QuDm}|22NEWn91UNhpt;hoaxv4_6^*wP6nBE&p(T04benOZUWbHE&>+bjG0<(q9 ztpX&1y8Eih^sdHTFJv+=a`L|i!ihn*k5?vb9uLv00ll*&lUf4n;qwg7jNCyfQU)M+5Dj}QvRhpdd^hdho{My zy@s1#5?DV?tq)@@ye%wEF@!ztBai=G&k5bJaS^3_cBB0ymAtvmMfldARGY0iv4 
zLgMnyl9~0z`JBxPfYn%z>~5m19WOEZ<+6$kv@k4|>e&C)|*l(<}3scHOcN2J$Szj-2)vX>mZW1by&i zSoRe{I_-^)6Jt+L26jWHq3(UO%AlFAG*ueA8*SG18|CaFx&9Dxzusl59V`K6pgJHg z(F4nowp5^ z99q7Gv4ZpQo*&ARa9}Mm$-auQ?c5x6WUORjgCVG-qpr2ArDx#vxMt;+@9D%5UV0)f z$8!yiJ2=ZVMwoAJ4`w7kVRwH5-@|7QnZQfZd?Jk_YyY#G2#2C}KflB6X2e%U*BFdR zr$KhPT|UW5O;{+e-Qt4^PA_UPpJ!sb7ch$_`~Up+DPyx+=X$M*! z%mnJKdpY97fB^yA)Oa9#0D&erHMd3iwU<=7v&3A>GDodwR27i_iad{IOsVX8oBLc0 z@<854WSx$Ia9fsuY%F-+?#MUfeY-%h2q)n4PAVwNpq|EFMD2AvCzrf_LA}Xtvj|7P z=Y;fvQoqRasLv0L!Eg3QOC00-e2XJy*|(7y@mS!2l9sc@ra4B1j|C(o*^C3+S9~Uq zS_0u%^b`uxEI25Lsi_Kk?xm=vdA4dLp9hwWKVg2P5 zmNX&#{na^X*LJlJ%I~qQ@0bSjuph8-Xl_Vek6`dQ-%Y?$U(EPLCx}A)YFX2uKtikKvbj)7+2H ze;P`VWG(0|hA^n*!&oI(GFXEUDoD5?TT!`TLJ@scfML#$#OJ7!iWT)-0r{NC0}MLp zfK=EM3YmyE^~D0ZQoBCeX?K{~ORdSq(y|0lVP-@=UMY2~M-weIfPk2O9KFUD_z}5^ z>d!|OhizxaY@GG_Y)wlp6bK(U*WrdJ$%{-PD49)s_qS9v;Nouz^qSaFv)c`hV+P6Y z3*m_Eqc8kMEd0;dp&>p?lX~x)o%JcE#jn0902IsB`c5V?_!NdYM1*jdD+)xSRxJhw z7*LRaySvy(5!i;~sl!#MzBGO&5%yHc4~X5z=P-*8?gj(UMRn@zQk`i*BE34?h$9Mt zVYjchcmuHN&H_@>k`3{Az%wzX5I4_^90B_`R_lT;TUSGrRZhzpwlX!=ku|A&h^g>) z*9#2z@NTfGsGee9&Q3kpqc9X0Rtvk;78;%Uuhn+O?mO>}lR&yWII&B-Sqw=ABlu&B z#0ysdtc_Cx5x5ZFzZ-VD0R^K1Ab-ph%3(?=FlrmG3%smVXfMF`puLu=m*7|rF|Ae^ z6mnw((YSnfL*RNk^kk$g71oL;Yl)X+RLR6-_aMh%bDus0vZEdvjg&Bgf~ReiFsn`g*s_iAa;+9C`=7 zM<7x`OOuSN!4lA4OZD5iIbDviaNf}^&X&0p=eUj0yT1+c5(s;S!d)*^Z%iFd7xWkT zioxvFIquLf=roiF?a69Q9pInI~y|rY~%6%nkVA8v`#eVAujz5qsEWNX`FyO6 zP-h4??i-z674)OEDq2)=(y?xrn?{~KN8hY|Nk>^x{1t(1;V!mr`~H-)()(vDovS{z z4$t$_cKU|}t)R)-vdgYpo~ffAJXt>~no*F4>Th>szAlrY`xyh)zgL-LbQ1%@a8M$c zds`CQPew(IIt?nra5s3}hMBBJ7+h9!qT7&Qf93!@5#j6P3VVg*jncx)Yz)yi6krM< zH+H^_6@hGv9U#)i64CVtQxAb0ZR}+HY)ZP|r8F^s)MMZEM&l3n@GS;Vw;jDL@MX68 zWDMh*jeA{ci0@znt3qV(nvTuF#fMW^x^9UB8KFknUF`>qyX#G0z$c?S2YF97d~#Et z4>JsYyN^8?7;No634FI;nUz4Nlt%kB6?x8|CWj5rxgcy|WpXw<6RuFK(fA)xt;LF4~4?E;og zn3cw6VwNTkG3`if!65rE34V_MErZo4-=YFog2O>judmMPG)}H0v1e4$V6J&CC){(z zO1cvl@&GDFw?(ArUA3RiFM|7IbJ;`w4K1?G<)j!%wt=e(mOUU^VY)CUe# 
z@C!Kmud?E)yfsP0jTXn*Nh@Ep;dp9}jJEP3A96t;0}Y*#NW9wSnok;RH;Eb=v^a!a zSYmI?I#1_L=S!LUX?#&!0KT){(oOUHVzJien+|dnR>P-IRHNukG%)1ITJU_a$7n*H zcgi|(LI5f&6{9+?Co}2xCw2eye7s~EPi2c>YM52*@5BfMP*W3KNWD3WN{@fgHzbP7 zFfuD3tx;*J@x;$Menhk-b(NwOO@v(O_&K^O%JZ&xeyGh{`Y*2H19iRmZn75s6W-ZaCGiF=+S>VB41c zp+=`odiw{ru}<`AR0qOLbPM7FM#7k4Z7a?8^||ZR;!wS1K*`SrqPaxa>pAq&ZxV3> zbDzau5zeWX0jlhiMk$%F!`pLIYxJ72UpGS_M9n=d6zqQT)=Tdz3NuuBFmx_{u2>%W z$KwvFC6?iQ#gZ|Bau&Zteu8E_=Z{k@M(F0Srk}TmVHOP{A9~~7muyFjfeVva?cGD! z;hYj0=wcY0WSppmmI`8fqayM{RR(V*sbZ{QXrX>I&llZN5C(9Q2;>l$@L7o6h0AR& z^{X!j-JD;@eM!Q=xvNi}e2sJDKI*rB4eWePB$+HwDM!0M(*a63%CUb_Q;)wY%n#2(^uLL+464+P-3k?vLw^=ED?%r@mxo zZ=Fjz@%Bi)&UXE^8R`|N z{e8L%;H}Ye5Qb9pgMc1b6Xk5yD4XFHv-FlB^!@QkyACUXHemJRqU(cPosBn})yNnh zu)jEv?@zqx7pEHri7FU-8d|}+jqW~Nm`gyx59-VmvhLN_O^wofv#fSYM;+I5C1qW3 zF~z)9&A5t%8B#MUE&(;0OW!bteQbWKMOMWNK^!JlM+#3}Wqd6Qu#EhgCrih|D^bDk zFut-(#e`E~K%in^mUz8zZ$I)_&B;zYVaX@bG0|XiLN&lBYM`Ac{MqWxEj)`6c^`&6 z4nP`&#~cj*SgbQeB(MG(4u=G{N!C5xcv5Y1%~NbP?v3LLCPszC8H@i;x*uLEZ!uRo zP0=4)?Q1wQktLvwh@7;EuZguc0tu?i|KqmJ`?`XFJsJ@MM~d5TzgSaSG2!C73tuTD0emftmXwx0wBd4@`Q-X3`{~}x^0@z&>yZgs zDZoB5yJCWtQS7R-*e8*gkjo|0IyE=>$ZIwUY*rt65a0I|k)YhBnu}u)5uyB~TVUbd zjYi?bq!}AE3Sn69xEwaomTR``wg1B81zB9Txe%JlBWuEfl_LqCmw&AupbF*=>JFiX z&I@5fWFn=O=N##4L?e;{!BG20)v)TL8N+UxkY|#z7zpaLI`O?=!I9`E4f@?m;1R{o z3O^Re=6VAflQf{pV0SqJ)q24ig3uoWhLW>bhdHs9)oo=>j9@r_QGD_>%?u6n zAI9sr45(xV&jzmtN9zS+L`dz1BNKZus=`FHh+QcEsvHI~%x;%UDE3}Pfq)=5eZD)% zA2D+0hd3v7{tJ}hVh?;-17kVf&J+YEG-W+RRZ6=MO%cqU8R<7WYtdlOaXmfZ*!4&) zwWJ?|Mj}i3jA>WzekXzk_8O@E;Eu02437 z+`5~uRN`nsjfj?5aS3=dFz1W zTD!~0a9|=|WVsg`hlY5%oShY364@G4#vi07eh@d&knbe#3DX=U7SJT-B?u?KyYSdXgfaRui!@Pd7p!{Rr+t3?M zC@Gm_DG03htblaN!(T*aE*uT;ezxl;csQqCa=z1va&e7X{qUyN?sI z;nRE&U{wlds*u~}uwSavFWy5-4B$gj+j1-ZoJF&D^0%DnEb0Kpi1qQWe5((nF{PIF zH>=z8?FUe&r&J*4t6Z$dVojc6#}_B2a%z9!(}n67IMoKWWoTjUi6FAGzvZZiD|nd( z67=dcI|t>oJ#zqb3f=S@ly3L`6IyMUna>i33hvvgU!L>vcasn)xv?J*JK2EpGvx^x zc8i#eM{rLkhGZ5v__(bFAQkm&!o-@`xZ?v1E^$XSUy@Tx0&jIG-VA~eFleyiFs7Qb 
zTAEf;TRT%)&h*2)d_q}3@?oiwY(9e~Mj;V2lnmcB-JQ&lna?!-O|Q!+9Y+CeT6xmy zY>s@NSE!^-xhxM2^%6dlTx&cK>oQ_7=?{JLVk>(vo-u1kPWI+>wJk;8?0kS?cOW@a zLKhQNNf+Sth4OP|sv7RRA}+m}?6cq756T~>hjlo(hBLBh7Sj~?fIP*jl`AidsWl%` z&R+|pg!mViXrktP?>BcRA{XuH)*F`K67y6r7f!p?DZ_0-Kpf@JD_|PDVA$8 zQLi=IlaeUk?783Uzme~Rz3G_j|2Z0be|s@~yT0BVLvE@;YHBF7snT0|(&iXVx3tmn zi8`2!qbvuJ%*)!ZwMk8UU_dn`Keqoi2%!#UjUrbiN8fs&+k9s2GCb~9_J@ZPweCSx z$9&T4_LRs1l+}a8F8qfN^q=kUH=$hC^H^twsHu%VC7_*K9WX2GKhui}P{@Y&H8TvPjAMhp~c=rl>Ipo9x!uM#x_4~ zgb0}AnS<`3g8)6;9eA8rgejvucujhvUyP*&1xN^^+Kzop#O$n7x^G*!NoA1bdK{uCUrf<<{mH~Q& z!e68(4!l5P!q}Kld0g=mU)0RkC!pbnC$=Hr?vU9ED%}5nd!yAQi(KzYHHAK7hmdFcx^z|Ie zhq+R@vp)j4)s;vv?O-XA*?J1lP+1(-Nokm*7|`d`ZyEu@hMgk#f4u1JSA=B=hV+L+ zIBGt6evqsiC3^NPR7Bi}vhVuDn(Xt@!D5V7pJp69Zf41UPCd95KHj|bm`HhGaw9$1 z;We1|VY%Ci4qI(mVB@_)p}oI42?s45>}WlXLqb}<4|P>#nio4(rI|BX^WhSYFNg{) za#Wj;@bDn4W~uYsCYlgqDKrvgng+ob32dGbY?FpxoF zoR(bE&d{AV8pfXV>9STN+_CtT+o&oED~;|2idIz>+w`;X@bFlgTkE4!NRuf~)-=t1kQnH9NDeW)$=chmQZ3XN zjXeDQ6#k`vzX(rc{zq_dFHjewBSPe~F>0+`U0b8aieZY(yO7W#DWTlp@LOGc{Z{vU zEqyM@_t-4Uigtf7E2$7&6X-LNvVU@II`WBAbqqtYY`gZ7SAo-dJK0I3sT{HABHdaUx$?AGg4A^pB&6BdikY8 zsZQx=VgwRsI#Cls>etyKFzSuvjiOE5$DdMCMeW7R*2Sj<}@m=^plah$!sYema7Ig8b zmWIBYA;J;X>wJ;tbL?P}ksxR7_2wa+-<-Y}c^&%$#njtRH<-@VD%CD*vVG>Q)Ns!> z&A*Pc!D>yw43FVIVmvGMXPso@9gtIhwq##ubH27jrL`tXTKx_JZ&pvPay0b zWIWTg2}SDrfw29l#X&;sCX3(AKba|0c?_4OZUtv1Uu0j6u2}_(<~yD){XFBvxV+W; zF9qtcNqw^Y!+)o}^Rn8!%G#$=Nlbc9}x9f%JTL|e5 zgqPS;z$yr*4p$G_7EyUS{=0bVEycy4xygBu!uXjo1j!8f41RQ39tPRKx!uNS%;=LY za;U$LwchD;!1>1$ZtLJB@xfdyaQ5$ z=|1H@V>?d4F^Kq^NoUL2>JG4Tf=cK5AIRqCqYc8Im8=>Gl-mh%0zdPrBw#-+Fm&E}k%QZh@P2Z16lGykOdp|^y zc>a`FGHz3A?wz)weS+M#+jx>>Nu6REtLZwn6+nYqueJQK^{UO^VlVRB?L|7j*3W)v>Ao!+eTSR-UkZuLwP&KNJI9xC-&;Sx(%RSzCoWsU zb$;NAX)h0z^P*VE&KJamE?Pd7NsiFIOOu?KzP zl>}9*X&o`+;Mp}Wo>$3UO2X%Th2Ae)OhP90cinHkziy}xvdo5aw!U%$F;06u(M$MG-3`3`XWF2^!|J0yc>enI)&)LRvu;3TXshJY^yg zY5JWZ^aR&|_<;teBMcoupKQsXk5sM*O$P$u%j7N|*@V;&FXG%U=y$>n!G~lY!CSS5 z6Dwuqk}sbz*W3*S*is!#uCdxytGoCnpU;JYwl^Mab<_!H5Qf8#vS*51~>2 
zC>!;>K4Af}KzgqvpZ6;-vB(d_>n$El22i&5(UhM7tK0G1FR7&^F7SyQ5#^$A+!A+} z<4~B2p1=x~CdMpb7kMm|B>AM$lj<%tBAi^;C7vo!JNMgiSwiAk_uG?1t?_rjbmXXQ z<@EdQDq>$k=BZ5m@MW!Rj@ z>m&MjDrcBZ!v$I{=EjXLF^4ByrA}ugH8rZC-3n_q2^O2Ur28MpW8w_db4I?tz^h#2 zB6-!fL8Xy^7$4;5OiF^r$ee_|bb%K*Bip|{IbMTR5`AitN!Mv^Vo@Grno~#8N{i-q zWS}N_K}_oWKz)8yB%0B&gp$Jvf9Tuq7v9>uS0u@>&8DO9(FOg}X{&#&=WP`3FAS%3QYX>3aT+bH8l)T^5(s^vt$ggNb1L z>CN@m02FLDazD*6dztxo8l@XB;Y*B`mTpWd=?U&JyQA>QK4G$PI&-v{<2gCH)vGL|AQ;%qK zIhS4IcgZDOsG4q_Eh)paj>_?_kWE^g3%Kua*vd?`|NAMqpvOy_-TCc2Z>`K^;k?0N z57clt`IpoFNuGeuo8qNm$#VFx_I4=Vl=Iu1*70H`ESL4#efbE2s2pYQI5XRm776KD zsNjpQuFBT?G1vF41?Jt}$w;~y*q6hRU=*A>gVVnupXbY!d1#c1ab;pg^WnSqFo`c0 zeto(1g@Ge-b_&dQ$f54(cHI(Z@%b_}%C8RK-MfN%xD7`ruktEMI-+`FSh+YQ~ z=Fp*lJDSdvkvd4Yv>Qa7$mit_|7|=?D1)iN^>%Wr&v?6c?u%>|4RX-Ub^h;{dpVmz z_wn?7aA{yy7d<3t$kA)%RhFXhI;;P;OWt!%BAOMj0PU|d2E6WPJL0HIFzY_3c_yH? z>r|zVl!_5852|qSM7HQpK~AQ<;YGerUp_)BNgti&EpEGdvd-k7VvL8vBpkQh5_IW! z;0XpEK8s%QihMcd%;!s~z4du{C#F`&_BS4kre5HCpA<>tkkD}-m$cJsr_pcw?zcS{ z!2(6P_?vJb6kp*_^>WgusZ=A0qU>s^HY!QXWwh}#dCpvN>o0bsR|}uen2dCPDKf8w ziG|XxxK0HzuO0T)n}56H2#h8QijN`s9EXUVxN~8*R(K#2qtLLJ#Ds>+V<$0>Ge}GI zV};wcl-GIyR^In{8nzwF z?#np#7zB(Ev-+ZqSR<`~Q(3xdl zAFt>kt*4xd6x3bklcy@Q9N2x%_28ws?3d#8$pT1(JY{Rs^pr9<^N4LIAQu)1cb7kzH_qf)oSwCOaeRQ!N~ zl5;eh(JHU|08Q6PitIaCFVOceJaDc=!=_`;c+4|pyiCck{7Xr7gO`cHzwd96nS@`G zQ4F?hXk0UgVTz&-#H&Rd@GYHXWtbtkeGqb%<_}xyOFmD}1l?$3zRSwR)0AMCj;%ze zce@+K4gVn6Ihv*Q#K)!alDsk^e=~~5X3= zG`iFaJO76W2DM=k>6|afy;H*6-T5xuLg}%J%={-!s?ZDzh{24#*i~_cKy8p@Ck=@$ zUovl%ak#>l<;5S~C;TNJ`sG*G_fuK;v_At8Sb4pQ-R_Kvx;19ZUy6~EjHA+`!_q%_ z{E&~Qf(MKaIe_RZYa$a<97ic#Lw1w@Q_;?1zsN=$GL))s?n`%PiY-3%t5(0W8kK^y zDurb3Iitgl^V3Cb&T9+_4s6@1)1}lwNC)RP}OFoaym9dy) zZ|G|G>VtvN>Ts4^IKze)>*4mB@*^II4A0Bc%c8H1i?c=9el(F9F)EGy?~i^23-6mY z&593e>^&we*<$yae9J{&Z;eJhe1@-)Fq1sP1iG}ej*iW^h5D(D0Z1tRt>eiv#z#O;|O(PIm}QJKBG}f~{Ps@#C*+ zZ8uxv*J*TbWF&;vVar3UE#k=b+L243orSi$U{(}1?9LU_D0MJef_19xKAdiEj#N$^ zfNfa?Fy7v>z>kY!k=FHPm*d5W_aBdTH#GrEnNo=@ig*DS!N|k8DT@6;DPhcl`!7ay 
zI583$sg<;fD~bZ|ftnysF?c-_6}lW&tn3quJ`*aPpZ5#vk1EY~F-O?LnAu5EtH?3ie)l+UgiZk^MS}ET}o)&j+WJ zt_VA0Y6-LJTb^Zx^7VqUHAk2_9LUv^d=SwkN;y6D5C%ONUX`J7kNfx*BAEf9X8bl<6RGqZ)F$e9#D2M&p7%+a z;O0w#J~i&hLBCEt$|JjDyt+{x{X*lUn$9ISfOb6xMQ(< zm?F_5FD^_lAQ_9djO+ZY$1aDQ=e?^z?Xk2848I~+Oe2pk2#X_$((7d4Www?g`^%jM zW*!rWagx8%&Fh@r1Uq|47&P(tRBRr+Qy5-7-EPo~yf1N-932;pHP~EHxF3)eqKdg1 zUXZkTUO_|=u!lC;FBuD5cLl-PM7Pzj`19Cpb6bh7+v+R<#&gm6$4Xt*^wxTMn|S+% z+hZTNjd3axhozJbLA^30>xupf#tjbI+Q83DGpO>0!ttbX`4d{3BAcikWS4FQv+%X zVMnJ&vki`KI`V1!Tzli7*dKM;k=QD>0&ns=($TpMma}T3oJSTHRc94`9BTVEOEzCo zT7TAEfaxF4XBq#k5OXA$tKYMN!WJ)->$nL+Tu}bL2R*$tWJ9r$q75!@=%zqZ>PzjG z@IkTcnW}V_P(hxZK1;eJ(nsZew;iO2FByRopjM(J1D_fQcVS|OjU4vy^@GnTwYN?? zlgXSU3<{S?JO2J+U1=!hT>@YZJKUbM_V*Rqt^P_IKQkXPNmhRkL0s|u{k0K_Gnu0> zsl{s0;SgPSFnXuI|Ii6Va<#2?%HK%uSsYW)Lja0Fv&7g@z^v9fSu?+R7__EAw_d;% z!zDuoV;pH8r~2J5zICnN$qd5mWXzxYfN{0sd`t(Vk|IUz`uMLcHHvL5SP@Lok^eRc zC(@^t@{J_CwHAb+s!?#?*y(qP#Zkx}yAXvfm z%t2%Ej*l`K8nA8-+#J-k!ADi*UGc%WMSS3sA#$T(Wh|qj$iBT69;erBjRYLIYl@az z)dAyvD?9W5^s5m6{}-(P_YYvSquH;dTwE}eiKV@t@W1%V0dHZO#Qe{I1dj5tJp%(lRr>AnS34dnPOqv(^3=cdTG_vpYtAN1 zdv^i`+0QJVvdcBsK+vhv$R{J2pu&92%>P9CWi5D*nv_QYh`z!$h;{WLE zETf|QzP~*THFS5UbSR-n3(}yJlrl6(gI@&+hX!d0DFJCD96E)e5e66#0qG6_N$L0> ze=nbx53g9P#bV~ZZ_e3!pU-s->Uxpko^3xOdG-2egreDdLl@E91?(DsZLpsV|NNQa zwT{+EkSZ|h1Uc!AIYChbz z4_Ff5aQM*D!UUjrvqL}`DgPsO!9)zu=_#dw*La5i;&t{tAhpBqIz8^M`yZni93lJ> z#4QqQU1w;`s+RRM+I*i<S`xCofU7oo9sqyf7zG^p|6?VDN81taEu(-IGoXtl3)h6bjQ{g>e*~hKo z0Wb6v9GY$w=e&Fh#0RcTSe^iaW^nkj=Sea?2ZC?^LiHeb&4W+`Sz2CjLey5@BD^t! 
zy+Y#}O0qIEH}`y+W(EE3_#HO$eGi0`o5ctVYYKvBC368)K@`>6+W9MvykWs|97GWR zTqOyqb80+~17S5ew*|Ja@=U2~9Rf=3#Y5Z_HFx-(Rr@;Dup|bhKz~ z|Elt#v*ir>PfvFJXz9&MRijgr<@`u1onwz#@G2rAieplqe4gcdIbI@Rk9MRmV{e8E zA$WBHivq7N4yPX6ne@B8`2@qMRV~z1c#i?kdy1F*?|}V94%hkC#FL!|{io|D>i+Kj z1|)tUKtA2zE~*+sAH`Hd0ki-y+p%{hu`vYl@@?9kRmrlxUIQ7D4?!F_o{*Z4kL5<3 zcoYc|okfPF`o9jLP!`jc*TUk$;^Zt&rsDx?12n9V%37CSnj8dADlMD@EZZn0U8bIl zDTWho`5 z<6~`@ZkxWiC1=B25a++pn1E4mPQBzEy*njF|MQzou9b()zp%JtF5r1a_u5DD0G?oqdVLuTK#a2sAPySrhhF~vQJhn-Tjem~26(Ur;lBaoY2nvyy#D}z(Bwu2 zOG|^5&r6})I5e4`4p6cBRkw13XnM5^5iSdV_s0;Si-Jih_iMDIMXsZTad z5KJ^;)E*d6`7`W$t!{rNw}w807?L)3j*w2I6UV#*`yGj8S&a*z{ENVUz!NaY?qLhp zcwX|vO3GAA{atfQgmoFWT*HtJ?OnUH0;(Phz~d(sD7}DfgIwhM()TBL{NNkG^;Q}X z7o>`|9Y`*%XYNC!>SW7C9kXSVtjfRM;cMulY8zS2>f_~Ac-9?BvGuEFgHzEe7O+lF z%bWMYP_WCnVBN<1czfA$m!`j25?uG@u|Eud;3;#Ae}^s8@$;unpaQOe^E)`zgTk~b zuz*uijW_d?KrKfJlbwH6OTIbO;2rana)|fdP@~!MzdFgcvJ&O7C=YjpL>yjce{j9NXgBpChI4+ix&2cl%XE0)Pc8&baT8J6fblAvOHL>4 zINo1+#KEH;69m{(Eh#^5trsA$eI}jaSON~PX2$qUwH#=xq4fUbGGBCIDFHZIh#@zb_D;Q^PMnnph*7s7+3LUh8A!d;R+X7eZ7k5*5!| zUMw?#|67}+>zLYSJ?~RoBtvEWr8|`$rU*VuEp##agC4I|61444NUfWeKj~|AaiBD+ zyyD92t>S;m>E133c5Mq@5-oPU6SV;?GZw7`cGgVms5EZxbY!KAe)y8oECSzDIkC}c zte{}L)S&+XnCg95br~?CyZiYb4YEwT^vP%Kc3NXd0W0J)_E4cjbueQLqQ%d#sSj-8 zNYg+|38Y-Py-Y0|)boqZP6Lpl5D?@_B0z27nQ2F~&Evv7YtjM7lNuaw&<-7CpX&W_rTmdM+>?}m%R z4t-f;KlA}s^`C6+g69oD23;Jw3SnHk0AAwc^`IX}GdE7`Welq?zI0WL=@rqF_m(4lozu69ZiPzsh6nf240BFdfkNrkQ3!)_RJb9HO3QmeS5wkK(N2q z1|nxY_Otd6;;>-C{-=Lf+I8_5#ONuGI~JAa8fg=x-CqRO-+=OnEzcDz%}$fg1<0f< z5Xv`b*30Pw^8Pg=3I015M z$k_v%_w@~P=#;P`TE7pbylYMHPwbsBZ4v38gw0=@J5g;$fb*~D-ty-y^}>@l7xjjv z%aoxt&`#Ug#;EeA@q?0}7}Nz-X`%^K2={C0K z-JU1;kB$DPeF~-UeUi5HTAPj?lK}AGR0O^cvgu%vf5>b=0r~~Vitdf-?nlexzLMdV z;l7La2QV%LfzKwNcs(apRM7;^m#((N#=}5_Y1{)%mDir1Z2E4j#{7_joICpLP`F=cLd04-(ZEvrWtSrHfC!77*s_U)wfHiqgcbs+r zj)8}0WRdX~!}PkaQjHd{_l#@|6gdh}s621E5m;Mjzh!rgJNnEcUP2+~K3N%(;JY=( zqm8-rT3~0nfUQ1mNf9#501P+(Uu>8rM@F0P(L&y*#Z!PPMdt8CC##u5Hq~72yQzYQ 
z)z{gcNJgg^7@9`GKHfuD@pR?RXe}lKtO^N70PJN(jX^PY9NmvYdnEfh5f%zOW>r$HeT;rhyCnqLPf=_& zeLyXfc!@^8UqD85IlH*j-q-; zRSoyBN$A+=Z56&kwFUS^Pc&R~u~SO0`3vFHBs)brHqk_8api`ruy4+PYq2BmeAgFD zt$hy*c9?b)$FO?QNC&%Pf&=P1>IHf%UdKm{V}Kuw6w~-5LvlIXpfdA!b5tvo5j~Xi z&W*E^-qqt^nTNOY>FAgJpB&}zBCr*!;4p+@Bj8k0s9OCi9XL+KPSvX|{Xbc+S@5x{63LXO(!?9`*FQ>4{RV0$l(7PX1t@Go+p-*5J@T9wZ$X}k%+kGB(t7HN`4DkdKTO;M9$L@3w(oS89)Y8fxe-;jtIcxLBI z1s7uKoE?^xhBCfKH|&F`bYLLfn}hau03UDoscKqBASHjr>W%F*($6hxEWJDb&0PB~^TSu0P|=+O#Qf4|EZbXed-|;1&w#AQA76pf36-fqsb- zPtK!CPI$wArG$QDT&f=nUTVkcCq1x%QG4&)T(?k`_GGbT_bt34fuXNCV04118ZU5h z9&;3!)w_r!dQZHlt}v;JsRlZZhxjUFQvqVmO0d}#&E?FlD6;r=nWLh?#XR#@nA)ApH4 z$4a%nqebR&&dj@O(ouTdajula%CuE@<)|;uTf$X7NW0A{j^wnjihD*s&n|U|yOU%J zoj{K{+-Lg}4$a%2YsvhhqmGh(em_W;S?+D^(Cy)*avEbt?qi2irQG#9;yOWg;e;k8 zkemSvi$9zhU^sNLuJllGc{ok{k+Ck1ZdP(2m@7NSvbm^TA9Rtay4=*P{Qb5uVwnp< z+~1EO68wlu?QB_|q%SFsTDVg8>|_YavN3TmMPS~D#vk!!P~1<-O{CZd)f~^EJ?M zmbFR$5%8|}iO0`?(E_k^fJU%}r{1uauvet^!TL~@3@HMFiou4>HXVIkeuqu}u;)EZ zAt_7XfjK*tLaXj4v%4fXmUL#couX#7{Xz(~3#W7UAo8ZMEpv#Vn^-@b(aLGQ7zq@5 zrvHWYP+1bGaEl`dX@&TlYPtHufOoH+uXQn2u5%?! 
zC@5mt*1ej|A0nWeH3cR=!UQL%tobUNV(@O$73dnD!BmsC=30s!b!xGKtsD1%E#XKm zb6vg{56>Tj|5sTRU&(S6Jsfl?2G_EX->iDvffofh!> zI;cciWc)J;t$E0FTK^fLSakOnfj>ZHoav)}KRJrH#VgY!L(dyAgL z$cK@!%}2*a4vpU98j;s$(uWA{8eFf!EWP9}ARTuFF&}|w&lpLELFg| zZN_wQ0?dQ%n-_KUpt`u!5s$kOiUs7KUvCRqE)e-_H5v^@<7lw7&d?*;s zNZO&(lxX{G(sQAq%&GLj{gD}YRUA9y2PzL#l&hxkoujx^U?U-S8pcT$a6)i@sL#}= zE3s=wsKheH8dplRpX4Hrlt|Iv!ETt5lV8xfNMcPZ62s@p93_xXHYyd_X-!JqiQP$e zio+3n=p(q%4%LD@e7ysjvQQdBz}%($e3E9}QRb(Wnx;ievp;l><)F1k+fJ8-O_mBa$!`dbp`|k@l7om_t*dLd2A*uXMbo|7NfH3vmE~><)aMWjDA)Ayj6Xh>?{E7RwX8a1{FqJvUCy1HRDcOB8A7_1Su!WY6P!n^hqpEj@k?O(G)3`Iiry1wEwTH<0mLRiwd$-GHf zYLhnUx&-Io32GVdRhFEhZ36iZ*55>AkO^FA_xY^454Ve5d8a^#F{OV?VuGoHcW{~y zu=$h%mZo`3%_KTz^2(!qlL-B7eek2+G|ny#Xmk1DA~e1}QOe(Pu>d5wTZ*XfJSM%U zbe)kpG-IQ!Uu4b-Kbo7914w>d^*lpp*hx4$0*R9Q{BR4F9hf%J><^;2r& zNL+;->E{_|mWV<#lfwxB<>Z4&a`h~F=3!+=Tkg`*l86?p3aAQ2xD5H|Y=Fa5Kdvrw zwA4WD!EoFAM8RjTe<7*GLFTkN-rofjkYb1HL$2ubqn2bN$H~fyr`5Ke`HrK?Y_8>{ zv6MXT8C+PWhow9hQSvsMPmWjt9OsPG5V#aJmy*s?i+_pK;N1dcCj&0wYD$10aJV=n z3qI>jJ;YC4n``woI2lY4fuyS57-3KJE&S?yL(_bq8w~Md-_s1v=qnM$ zP`%c(Ewx|j#Ek)@4?U=9)?W{I_>typ)r0S!#M!t^A?NSEyUBVA zloqCkuq-*hY4uF0x*hC$4#Q%=L{l|8DBAuip%qkaj63%GO%5@@5k)Vg&Aq@%f9mpl zT7LmQNLc3;VJFnz`Ej8p%@y8OG2(BP$upuwU*M&+M_O_^hnO}WLQFe8W$I6`t6l>VgbN*!402oAk$P@Lm}|KWY)@?;31g2UJ!xywl7H#>}pn3?n6o-WW_1|3)Eh zeSg07f0iZF@u0d<0kY6O0GV{2qQ=y8|WhKib%eWvA?! 
z`&NIVjTh;gw5w_C-QcxMFlb6$3)9PKOQ=5c>H}7_>qF!;p6|8ZQyB`4S36)iSoQ(ph*JBZ443sHt9t>(X+w>A^|Vvt zI-GlYHUvBf1Q*(59M^zx-8+uibanbnhG!X1+~#qT#DpxD&v=8of*aGL)W(}*1D2>R zco4ok9|l-{&Z}`#)i_gw((tCY{%)VnF5Fng6FYe@cIM?T(JlXd=>LX56o1P7Y*www z%jAoL;3_0i3_Oz8`G5QNGm(`>s_zbh#9QN|cwEnn6Z5y1w4y*ElmwxeC{bo#R1E=J z0L&B1-$%snL}~Q^@^w7u=-BuirlO5G_MV83-rzutC6j1tr=w@ZJB zSOI_FbtKu)(srD}wQZnSu3nCOV8`8Zmfn|@p#S%3YB@e#VB(o({jfx>bcbr4jz%x* zey^=5ZxEYbZW8jz>}W$A4U(LBaC;|62||ZuMhCCa#G`{MvEvj}%Na_rtb)_m&sq5F zXJLE-^7tViab*7c&3z7;r_Ydd4tKv){2po+&K^fCe|y@*1zBU?|T)T%h}=9&WxA-P=PFNbDc@VG7#ULq1UMhB@3)DW8N;I zFzKeI3ni4-3t3Ah4~LxrAG6|cSF^i`g^vyX55sje0t;{_g}r3CJEJ7ZzpAiR)zet= zqM&$SCQ}$hLK3eOZP}`_2s_h+76{N@%wQ9tIdGDXRsY%hcMeq}fK2Q=6i=I{WP54a zXO3ju;PxgY)PykYF;);q6@^^5Q7!`a_n8EL5d-L@oCYO>`gtV;+USEeH-IwkSFIo4 z!eeNz6*Cq#bqMQ^M?MT}r(q=Pa@qLn@?#i@G~Oq{YtH+Zdo_0ZHh-ABKY?C9#Xqf! zJmZYM0g9uEPOCB*{Fz$U2Pk1H;kPQGv^d=JksXs#PvScl-reeeGGX<^XXBKG|1U1l zctl`0jltI!EHA-jFJ}ABESEPf_CUHC9{h0mz8I--fc2iN-Ci{pT-$ z&qf9reI(I%l*PG;v)D$?tQ_hwh3ue2GzDH~0Er|E!6Ex{v#U8OvI6>owvXr+=pSm= zaJnnZrGa;@TAiAbUgj0dvN(B%j7h4S&#V?%<-PIRw<8Nb6O__k`924Ey}(2i3w)N0 zFJ5GORd#}Uvg>Bi_X2;U?JYQk?vFloq>_Sila0jD{REOt_Mi?>bu!V3WJym^r#gy4 zMG+rAMs%2A;5A0UZ+;Z(rPNyub^=#RI?fq}4oCn$h%dunCRJYrFSXqW_Q6tWEQON; z>M*{(wmVt!o1vntZ+$|oomex0Xh3Ao8>9{lc}J@{3HD;7+~*C~CB@^zzkA}SBGs`F z#^4B##(5xP=f`oPtRE^Sblq^_q7IsL;3NJ}FWdi6m^UpjZLeFleC(}pA?^<{ORFp} z;00}ZFpr8dZ9SrUX zk)WqdGzoGD4zE^DPlU+3#I-B#7nYdSy>C5Yyv>so&oZy*9)rA)Mt)O%G2 z6=Iu}rY%5eD&-QmIoYk>Fw4iJv3pEaG zm-Vp)uKOECa266CMx*o1LjTk-fm@XePO_Q2}J|WHU60Q6-sdBJ~QiW^EnS%*lim=_w zq!e@mfl$T2}x2eOxA_juH?SqUcsz zLmE;JQdemb>=gi?#WT)k_Vk7~{s(jvj-B|#0N)T5E$DLpD#~`ZF9Q3+*x_0kwDoSu zG@gAGG?d(eH?4!v5sY$LqQ$-%6lW)g=#|sfLQ1ejXe)`@nb^=Z6HJ^PWXsgN;_I`# zdN)HQrZpMjz}Xp6m#`a2AoPOo346|htqkOdz`No1rIImfr+YOJ%-7c*DyFppOL_Tj zNz>wI^~B)ncCR(B^U6dyc$`%tHtnlqzJrieqmn1a=+}Pm;VR2^`TAr9lNEa4VV&}P72MyNDSRLH{*H}NK5(XMY9mjJa+9=@HsJ1V0 z9uyB6X=A61(aSFaSUy-X1x1M$wOC>0Q`k8%JV+LnGPrVLYA&szl!=rgZ4^djt7ihPqL<-wW28u}Xr*BPgKs*lZ!{L 
diff --git a/docs/mindformers/docs/source_zh_cn/guide/deployment.md b/docs/mindformers/docs/source_zh_cn/guide/deployment.md
index 0df508c52f..126030a76c 100644
--- a/docs/mindformers/docs/source_zh_cn/guide/deployment.md
+++ b/docs/mindformers/docs/source_zh_cn/guide/deployment.md
@@ -1,4 +1,4 @@
-# 服务化部署
+# 服务化部署指南
 
 [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/mindformers/docs/source_zh_cn/guide/deployment.md)
 
diff --git a/docs/mindformers/docs/source_zh_cn/guide/benchmarks.md b/docs/mindformers/docs/source_zh_cn/guide/evaluation.md
similarity index 38%
rename from docs/mindformers/docs/source_zh_cn/guide/benchmarks.md
rename to docs/mindformers/docs/source_zh_cn/guide/evaluation.md
index c1d8864958..35559e0e2b 100644
--- a/docs/mindformers/docs/source_zh_cn/guide/benchmarks.md
+++ b/docs/mindformers/docs/source_zh_cn/guide/evaluation.md
@@ -1,6 +1,6 @@
-# 评测
+# 评测指南
 
-[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/mindformers/docs/source_zh_cn/guide/benchmarks.md)
+[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/mindformers/docs/source_zh_cn/guide/evaluation.md)
 
 ## 概览
 
@@ -8,7 +8,9 @@
主流的模型评测流程就像考试,通过模型对试卷(评测数据集)的答题正确率来评估模型能力。常见数据集如ceval包含中文的52个不同学科职业考试选择题,主要评估模型的知识量;GSM8K由人类出题者编写的8500道高质量小学数学题组成,主要评估模型的推理能力等。 -由于大模型能力的发展,这几个数据集都面临数据污染和饱和问题,这里仅作为说明。同时业界也涌现了很多非问答式的前沿模型评测方法,这不在本教程的考虑范围内。 +MindSpore Transformers在之前版本,对于部分Legacy架构的模型,适配了Harness评测框架。当前最新适配了AISBench评测框架,理论上支持服务化部署的模型,都能使用AISBench进行评测。 + +## AISBench评测 MindSpore Transformers的服务化评测推荐AISBench Benchmark套件。AISBench Benchmark是基于OpenCompass构建的模型评测工具,兼容OpenCompass的配置体系、数据集结构与模型后端实现,并在此基础上扩展了对服务化模型的支持能力。同时支持30+开源数据集:[AISBench支持的评测数据集](https://gitee.com/aisbench/benchmark/blob/master/doc/users_guide/datasets.md#%E5%BC%80%E6%BA%90%E6%95%B0%E6%8D%AE%E9%9B%86)。 @@ -21,11 +23,11 @@ MindSpore Transformers的服务化评测推荐AISBench Benchmark套件。AISBenc ![benchmark_illustrate](./images/benchmark_illustrate.png) -## 前期准备 +### 前期准备 前期准备主要完成三件事:安装AISBench评测环境,下载数据集,启动vLLM-MindSpore服务。 -### Step1 安装AISBench评测环境 +#### Step1 安装AISBench评测环境 因为AISBench对torch、transformers都有依赖,但是vLLM-MindSpore的官方镜像中有msadapter包mock的torch,会引起冲突,所以建议为AISBench另起容器安装评测环境。如果坚持以vLLM-MindSpore镜像起容器安装评测环境,需要在启动容器后执行以下几步删除容器内原有torch和transformers: @@ -43,7 +45,7 @@ cd benchmark/ pip3 install -e ./ --use-pep517 ``` -### Step2 数据集下载 +#### Step2 数据集下载 官方文档提供各个数据集下载链接,以ceval为例可在[ceval文档](https://gitee.com/aisbench/benchmark/blob/master/ais_bench/benchmark/configs/datasets/ceval/README.md)中找到下载链接,执行以下命令下载解压数据集到指定路径: @@ -59,15 +61,15 @@ rm ceval-exam.zip 其他数据集下载,可到对应的数据集官方文档中找到下载链接。 -### Step3 启动vLLM-MindSpore服务 +#### Step3 启动vLLM-MindSpore服务 具体启动过程见:[服务化部署教程](./deployment.md),评测支持所有可服务化部署模型。 -## 精度评测流程 +### 精度评测流程 精度评测首先要确定评测的接口和评测的数据集类型,具体根据模型能力和数据集选定。 -### Step1 更改接口配置 +#### Step1 更改接口配置 AISBench支持OpenAI的v1/chat/completions和v1/completions接口,在AISBench中分别对应不同的配置文件。以v1/completions接口为例,以下称general接口,需更改以下文件`ais_bench/benchmark/configs/models/vllm_api/vllm_api_general.py`配置: @@ -98,9 +100,9 @@ models = [ ] ``` -更多具体参数说明查看:[接口配置参数说明](#附录请求接口配置参数说明表)。 +更多具体参数说明查看:[接口配置参数说明](#请求接口配置参数说明表)。 -### Step2 命令行启动评测 +#### Step2 命令行启动评测 
确定采用的数据集任务,以ceval为例,采用ceval_gen_5_shot_str数据集任务,命令如下:

@@ -108,7 +110,7 @@
 ais_bench --models vllm_api_general --datasets ceval_gen_5_shot_str --debug
 ```
 
-#### 参数说明
+参数说明:
 
 - `--models`:指定了模型任务接口,即vllm_api_general,对应上一步更改的文件名。此外还有vllm_api_general_chat。
 - `--datasets`:指定了数据集任务,即ceval_gen_5_shot_str数据集任务,其中的5_shot指每个问题的prompt中会附带5条已解答示例(few-shot),str是指非chat输出。
@@ -117,11 +119,11 @@ ais_bench --models vllm_api_general --datasets ceval_gen_5_shot_str --debug
 
 评测结束后统计结果会打屏,具体执行结果和日志都会保存在当前路径下的outputs文件夹下,执行异常情况下可以根据日志定位问题。
 
-## 性能评测流程
+### 性能评测流程
 
 性能与精度评测流程类似,不过更关心各请求各阶段的处理时间,通过精确记录每条请求的发送时间、各阶段返回时间及响应内容,系统地评估模型服务在实际部署环境中的响应延迟(如 TTFT、Token间延迟)、吞吐能力(如 QPS、TPUT)、并发处理能力等关键性能指标。以下以原始数据集gsm8k进行性能评测为例。
 
-### Step1 更改接口配置
+#### Step1 更改接口配置
 
 通过配置服务化后端参数,可以灵活控制请求内容、请求间隔、并发数量等,适配不同评测场景(如低并发延迟敏感型、高并发吞吐优先型等)。配置与精度评测类似,以vllm_api_stream_chat任务为例,在`ais_bench/benchmark/configs/models/vllm_api/vllm_api_stream_chat.py`更改如下配置:

@@ -153,15 +155,15 @@ models = [
 ]
 ```
 
-具体参数说明查看:[接口配置参数说明](#附录请求接口配置参数说明表)。
+具体参数说明查看:[接口配置参数说明](#请求接口配置参数说明表)。
 
-### Step2 评测命令
+#### Step2 评测命令
 
 ```bash
 ais_bench --models vllm_api_stream_chat --datasets gsm8k_gen_0_shot_cot_str_perf --summarizer default_perf --mode perf
 ```
 
-#### 参数说明
+参数说明:
 
 - `--models`:指定了模型任务接口,即vllm_api_stream_chat,对应上一步更改的配置的文件名。
 - `--datasets`:指定了数据集任务,即gsm8k_gen_0_shot_cot_str_perf数据集任务,有对应的同名任务文件,其中的gsm8k指用的数据集,0_shot指prompt中不附带示例,str是指非chat输出,perf是指做性能测试。
@@ -170,31 +172,33 @@ ais_bench --models vllm_api_stream_chat --datasets gsm8k_gen_0_shot_cot_str_perf
 
 其它更多的参数配置说明,见[配置说明](https://gitee.com/aisbench/benchmark/blob/master/doc/users_guide/models.md#%E6%9C%8D%E5%8A%A1%E5%8C%96%E6%8E%A8%E7%90%86%E5%90%8E%E7%AB%AF)。
 
-### 评测结果说明
+#### 评测结果说明
 
 评测结束会输出性能测评结果,结果包括单个推理请求性能输出结果和端到端性能输出结果,参数说明如下:
 
-| 指标 | 全称 | 说明 |
-|-----------------------|-----------------------|-------------------------------------------|
-| E2EL | End-to-End Latency | 单个请求从发送到接收全部响应的总时延(ms) |
-| TTFT | Time To First Token | 首个 Token 返回的时延(ms) |
+| 指标 | 全称 | 说明 |
+|-----------------------|-----------------------|----------------------------------| +| E2EL | End-to-End Latency | 单个请求从发送到接收全部响应的总时延(ms) | +| TTFT | Time To First Token | 首个 Token 返回的时延(ms) | | TPOT | Time Per Output Token | 输出阶段每个 Token 的平均生成时延(不含首个 Token) | -| ITL | Inter-token Latency | 相邻 Token 间的平均间隔时延(不含首个 Token) | -| InputTokens | / | 请求的输入 Token 数量 | -| OutputTokens | / | 请求生成的输出 Token 数量 | -| OutputTokenThroughput | / | 输出 Token 的吞吐率(Token/s) | -| Tokenizer | / | Tokenizer 编码耗时(ms) | -| Detokenizer | / | Detokenizer 解码耗时(ms) | +| ITL | Inter-token Latency | 相邻 Token 间的平均间隔时延(不含首个 Token) | +| InputTokens | / | 请求的输入 Token 数量 | +| OutputTokens | / | 请求生成的输出 Token 数量 | +| OutputTokenThroughput | / | 输出 Token 的吞吐率(Token/s) | +| Tokenizer | / | Tokenizer 编码耗时(ms) | +| Detokenizer | / | Detokenizer 解码耗时(ms) | - 更多评测任务,如合成随机数据集评测、性能压测,可查看以下文档:[AISBench官方文档](https://gitee.com/aisbench/benchmark/tree/master/doc/users_guide)。 - 更多调优推理性能技巧,可查看以下文档:[推理性能调优](https://docs.qq.com/doc/DZGhMSWFCenpQZWJR)。 - 更多参数说明请看以下文档:[性能测评结果说明](https://gitee.com/aisbench/benchmark/blob/master/doc/users_guide/performance_metric.md)。 -## FAQ +### 附录 + +#### FAQ -### Q:评测结果输出不符合格式,如何使结果输出符合预期? +**Q:评测结果输出不符合格式,如何使结果输出符合预期?** -在某些数据集中,我们可能希望模型的输出符合我们的预期,那我们可以更改prompt。 +在某些数据集中,若希望模型的输出符合预期,那么可以更改prompt。 以ceval的gen_0_shot_str为例,我们想让输出的第一个token就为选择的答案,可更改以下文件下的template: @@ -215,66 +219,329 @@ for _split in ['val']: 其他数据集,也是相应地更改对应文件中的template,构造合适的prompt。 -### Q:不同数据集应该如何配置接口类型和推理长度? +**Q:不同数据集应该如何配置接口类型和推理长度?** 具体取决于模型类型和数据集类型的综合考虑。像reasoning类model就推荐用chat接口,可以使能think,推理长度就要设得长一点;像base模型就用general接口。 - 以Qwen2.5模型评测MMLU数据集为例:从数据集来看,MMLU这类数据集以知识考察为主,就推荐用general接口,同时在数据集任务时不选用带cot的,即不使能思维链。 - 若以QWQ32B模型评测AIME2025这类困难的数学推理题为例:推荐使用chat接口,并设置超长推理长度,使用带cot的数据集任务。 -### 常见报错 +#### 常见报错 -#### 1. 客户端返回HTML数据,包含乱码 +1. 客户端返回HTML数据,包含乱码 -**报错现象**:返回网页HTML数据 -**解决方案**:检查客户端是否开了代理,检查proxy_https、proxy_http环境变量关掉代理。 + **报错现象**:返回网页HTML数据 + **解决方案**:检查客户端是否开了代理,检查proxy_https、proxy_http环境变量关掉代理。 -#### 2. 
服务端报 400 Bad Request +2. 服务端报 400 Bad Request -**报错现象**: + **报错现象**: -```plaintext -INFO: 127.0.0.1:53456 - "POST /v1/completions HTTP/1.1" 400 Bad Request -INFO: 127.0.0.1:53470 - "POST /v1/completions HTTP/1.1" 400 Bad Request -``` + ```plaintext + INFO: 127.0.0.1:53456 - "POST /v1/completions HTTP/1.1" 400 Bad Request + INFO: 127.0.0.1:53470 - "POST /v1/completions HTTP/1.1" 400 Bad Request + ``` -**解决方案**:检查客户端接口配置中,请求格式是否正确。 + **解决方案**:检查客户端接口配置中,请求格式是否正确。 -#### 3. 服务端报错404 xxx does not exist +3. 服务端报错404 xxx does not exist -**报错现象**: + **报错现象**: -```plaintext -[serving_chat.py:135] Error with model object='error' message='The model 'Qwen3-30B-A3B-Instruct-2507' does not exist.' param=None code=404 -"POST /v1/chat/completions HTTP/1.1" 404 Not Found -[serving_chat.py:135] Error with model object='error' message='The model 'Qwen3-30B-A3B-Instruct-2507' does not exist.' -``` + ```plaintext + [serving_chat.py:135] Error with model object='error' message='The model 'Qwen3-30B-A3B-Instruct-2507' does not exist.' param=None code=404 + "POST /v1/chat/completions HTTP/1.1" 404 Not Found + [serving_chat.py:135] Error with model object='error' message='The model 'Qwen3-30B-A3B-Instruct-2507' does not exist.' 
+ ``` + + **解决方案**:检查接口配置中的模型路径是否可达。 -**解决方案**:检查接口配置中的模型路径是否可达。 - -## 附录:请求接口配置参数说明表 - -| 参数 | 说明 | -|---------------------|----------------------------------------------------------------------| -| type | 任务接口类型 | -| path | 模型序列化词表文件绝对路径,一般来说就是模型权重文件夹路径 | -| model | 服务端已加载模型名称,依据实际VLLM推理服务拉取的模型名称配置(配置成空字符串会自动获取) | -| request_rate | 请求发送频率,每1/request_rate秒发送1个请求给服务端,小于0.1则一次性发送所有请求 | -| retry | 请求失败重复发送次数 | -| host_ip | 推理服务的IP | -| host_port | 推理服务的端口 | -| max_out_len | 推理服务输出的token的最大数量 | -| batch_size | 请求发送的最大并发数 | -| temperature | 后处理参数,温度系数 | -| top_k | 后处理参数 | -| top_p | 后处理参数 | -| seed | 随机种子 | -| repetition_penalty | 后处理参数,重复性惩罚 | -| ignore_eos | 推理服务输出忽略eos(输出长度一定会达到max_out_len) | - -## 参考资料 +#### 请求接口配置参数说明表 + +| 参数 | 说明 | +|--------------------|---------------------------------------------------| +| type | 任务接口类型 | +| path | 模型序列化词表文件绝对路径,一般来说就是模型权重文件夹路径 | +| model | 服务端已加载模型名称,依据实际VLLM推理服务拉取的模型名称配置(配置成空字符串会自动获取) | +| request_rate | 请求发送频率,每1/request_rate秒发送1个请求给服务端,小于0.1则一次性发送所有请求 | +| retry | 请求失败重复发送次数 | +| host_ip | 推理服务的IP | +| host_port | 推理服务的端口 | +| max_out_len | 推理服务输出的token的最大数量 | +| batch_size | 请求发送的最大并发数 | +| temperature | 后处理参数,温度系数 | +| top_k | 后处理参数 | +| top_p | 后处理参数 | +| seed | 随机种子 | +| repetition_penalty | 后处理参数,重复性惩罚 | +| ignore_eos | 推理服务输出忽略eos(输出长度一定会达到max_out_len) | + +#### 参考资料 关于AISBench的更多教程和使用方式可参考官方资料: - [AISBench官方教程](https://gitee.com/aisbench/benchmark) - [AISBench主要文档](https://gitee.com/aisbench/benchmark/tree/master/doc/users_guide) + +## Harness评测 + +[LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness)是一个开源语言模型评测框架,提供60多种标准学术数据集的评测,支持HuggingFace模型评测、PEFT适配器评测、vLLM推理评测等多种评测方式,支持自定义prompt和评测指标,包含loglikelihood、generate_until、loglikelihood_rolling三种类型的评测任务。基于Harness评测框架对MindSpore Transformers进行适配后,支持加载MindSpore Transformers模型进行评测。 + +目前已验证过的模型和支持的评测任务如下表所示: + +| 已验证的模型 | 支持的评测任务 | +|----------|-------------------------------------------| +| Llama3 | gsm8k、ceval-valid、mmlu、cmmlu、race、lambada | +| 
Llama3.1 | gsm8k、ceval-valid、mmlu、cmmlu、race、lambada | +| Qwen2 | gsm8k、ceval-valid、mmlu、cmmlu、race、lambada | + +### 安装 + +Harness支持pip安装和源码编译安装两种方式。pip安装更简单快捷,源码编译安装更便于调试分析,用户可以根据需要选择合适的安装方式。 + +#### pip安装 + +用户可以执行如下命令安装Harness(推荐使用0.4.4版本): + +```shell +pip install lm_eval==0.4.4 +``` + +#### 源码编译安装 + +用户可以执行如下命令编译并安装Harness: + +```bash +git clone --depth 1 -b v0.4.4 https://github.com/EleutherAI/lm-evaluation-harness +cd lm-evaluation-harness +pip install -e . +``` + +### 使用方式 + +#### 评测前准备 + +1. 创建一个新目录,例如名称为`model_dir`,用于存储模型yaml文件。 +2. 在上个步骤创建的目录中,放置模型推理yaml配置文件(predict_xxx_.yaml)。不同模型的推理yaml配置文件所在目录位置,请参考[模型库](../introduction/models.md)。 +3. 配置yaml文件。如果yaml中模型类、模型Config类、模型Tokenizer类使用了外挂代码,即代码文件在[research](https://gitee.com/mindspore/mindformers/tree/master/research)目录或其他外部目录下,需要修改yaml文件:在相应类的`type`字段下,添加`auto_register`字段,格式为“module.class”(其中“module”为类所在脚本的文件名,“class”为类名。如果已存在,则不需要修改)。 + + 以[predict_llama3_1_8b.yaml](https://gitee.com/mindspore/mindformers/blob/master/research/llama3_1/llama3_1_8b/predict_llama3_1_8b.yaml)配置为例,对其中的部分配置项进行如下修改: + + ```yaml + run_mode: 'predict' # 设置推理模式 + load_checkpoint: 'model.ckpt' # 权重路径 + processor: + tokenizer: + vocab_file: "tokenizer.model" # tokenizer路径 + type: Llama3Tokenizer + auto_register: llama3_tokenizer.Llama3Tokenizer + ``` + + 关于每个配置项的详细说明请参考[配置文件说明](../feature/configuration.md)。 +4. 如果使用`ceval-valid`、`mmlu`、`cmmlu`、`race`、`lambada`数据集进行评测,需要将`use_flash_attention`设置为`False`,以`predict_llama3_1_8b.yaml`为例,修改yaml如下: + + ```yaml + model: + model_config: + # ... + use_flash_attention: False # 设置为False + # ... 
+ ``` + +#### 评测样例 + +执行脚本[run_harness.sh](https://gitee.com/mindspore/mindformers/blob/master/toolkit/benchmarks/run_harness.sh)进行评测。 + +run_harness.sh脚本参数配置如下表: + +| 参数 | 类型 | 参数介绍 | 是否必须 | +|-------------------|-----|--------------------------------------------------------------------------------------------------|-----------| +| `--register_path` | str | 外挂代码所在目录的绝对路径。比如[research](https://gitee.com/mindspore/mindformers/tree/master/research)目录下的模型目录 | 否(外挂代码必填) | +| `--model` | str | 需设置为 `mf` ,对应为MindSpore Transformers评估策略 | 是 | +| `--model_args` | str | 模型及评估相关参数,见下方模型参数介绍 | 是 | +| `--tasks` | str | 数据集名称。可传入多个数据集,使用逗号(,)分隔 | 是 | +| `--batch_size` | int | 批处理样本数 | 否 | + +其中,model_args参数配置如下表: + +| 参数 | 类型 | 参数介绍 | 是否必须 | +|----------------|------|--------------------|------| +| `pretrained` | str | 模型目录路径 | 是 | +| `max_length` | int | 模型生成的最大长度 | 否 | +| `use_parallel` | bool | 开启并行策略(执行多卡评测必须开启) | 否 | +| `tp` | int | 张量并行数 | 否 | +| `dp` | int | 数据并行数 | 否 | + +Harness评测支持单机单卡、单机多卡、多机多卡场景,每种场景的评测样例如下: + +1. 单卡评测样例 + + ```shell + source toolkit/benchmarks/run_harness.sh \ + --register_path mindformers/research/llama3_1 \ + --model mf \ + --model_args pretrained=model_dir \ + --tasks gsm8k + ``` + +2. 多卡评测样例 + + ```shell + source toolkit/benchmarks/run_harness.sh \ + --register_path mindformers/research/llama3_1 \ + --model mf \ + --model_args pretrained=model_dir,use_parallel=True,tp=4,dp=1 \ + --tasks ceval-valid \ + --batch_size BATCH_SIZE WORKER_NUM + ``` + + - `BATCH_SIZE`为模型批处理样本数; + - `WORKER_NUM`为使用计算卡的总数。 + +3. 
多机多卡评测样例 + + 节点0(主节点)命令: + + ```shell + source toolkit/benchmarks/run_harness.sh \ + --register_path mindformers/research/llama3_1 \ + --model mf \ + --model_args pretrained=model_dir,use_parallel=True,tp=8,dp=1 \ + --tasks lambada \ + --batch_size 2 8 4 192.168.0.0 8118 0 output/msrun_log False 300 + ``` + + 节点1(副节点)命令: + + ```shell + source toolkit/benchmarks/run_harness.sh \ + --register_path mindformers/research/llama3_1 \ + --model mf \ + --model_args pretrained=model_dir,use_parallel=True,tp=8,dp=1 \ + --tasks lambada \ + --batch_size 2 8 4 192.168.0.0 8118 1 output/msrun_log False 300 + ``` + + 节点n(副节点)命令: + + ```shell + source toolkit/benchmarks/run_harness.sh \ + --register_path mindformers/research/llama3_1 \ + --model mf \ + --model_args pretrained=model_dir,use_parallel=True,tp=8,dp=1 \ + --tasks lambada \ + --batch_size BATCH_SIZE WORKER_NUM LOCAL_WORKER MASTER_ADDR MASTER_PORT NODE_RANK output/msrun_log False CLUSTER_TIME_OUT + ``` + + - `BATCH_SIZE`为模型批处理样本数; + - `WORKER_NUM`为所有节点中使用计算卡的总数; + - `LOCAL_WORKER`为当前节点中使用计算卡的数量; + - `MASTER_ADDR`为分布式启动主节点的ip; + - `MASTER_PORT`为分布式启动绑定的端口号; + - `NODE_RANK`为当前节点的rank id; + - `CLUSTER_TIME_OUT`为分布式启动的等待时间,单位为秒。 + + 多机多卡评测需要分别在不同节点运行脚本,并将参数MASTER_ADDR设置为主节点的ip地址, 所有节点设置的ip地址相同,不同节点之间仅参数NODE_RANK不同。 + +### 查看评测结果 + +执行评测命令后,评测结果将会在终端打印出来。以gsm8k为例,评测结果如下,其中Filter对应匹配模型输出结果的方式,n-shot对应数据集内容格式,Metric对应评测指标,Value对应评测分数,Stderr对应分数误差。 + +| Tasks | Version | Filter | n-shot | Metric | | Value | | Stderr | +|-------|--------:|------------------|-------:|-------------|---|--------|---|--------| +| gsm8k | 3 | flexible-extract | 5 | exact_match | ↑ | 0.5034 | ± | 0.0138 | +| | | strict-match | 5 | exact_match | ↑ | 0.5011 | ± | 0.0138 | + +### FAQ + +1. 
使用Harness进行评测,在加载HuggingFace数据集时,报错`SSLError`: + + 参考[SSL Error报错解决方案](https://stackoverflow.com/questions/71692354/facing-ssl-error-with-huggingface-pretrained-models)。 + + 注意:关闭SSL校验存在风险,可能暴露在中间人攻击(MITM)下。仅建议在测试环境或你完全信任的连接里使用。 + +## 训练后模型进行评测 + +模型在训练过程中或训练结束后,一般会将训练得到的模型权重去跑评测任务,来验证模型的训练效果。本章节介绍了从训练后到评测前的必要步骤,包括: + +1. 训练后的分布式权重的处理(单卡训练可忽略此步骤); +2. 基于训练配置编写评测使用的推理配置文件; +3. 运行简单的推理任务验证上述步骤的正确性; +4. 进行评测任务。 + +### 分布式权重合并 + +训练后产生的权重如果是分布式的,需要先将已有的分布式权重合并成完整权重后,再通过在线切分的方式进行权重加载完成推理任务。使用MindSpore Transformers提供的[safetensors权重合并脚本](https://gitee.com/mindspore/mindformers/blob/master/toolkit/safetensors/unified_safetensors.py),合并后的权重格式为完整权重。 + +可以按照以下方式填写参数: + +```shell +python toolkit/safetensors/unified_safetensors.py \ + --src_strategy_dirs src_strategy_path_or_dir \ + --mindspore_ckpt_dir mindspore_ckpt_dir\ + --output_dir output_dir \ + --file_suffix "1_1" \ + --filter_out_param_prefix "adam_" +``` + +脚本参数说明: + +- src_strategy_dirs:源权重对应的分布式策略文件路径,通常在启动训练任务后默认保存在 output/strategy/ 目录下。分布式权重需根据以下情况填写: + + 1. 源权重开启了流水线并行:权重转换基于合并的策略文件,填写分布式策略文件夹路径。脚本会自动将文件夹内的所有 ckpt_strategy_rank_x.ckpt 文件合并,并在文件夹下生成 merged_ckpt_strategy.ckpt。如果已经存在 merged_ckpt_strategy.ckpt,可以直接填写该文件的路径。 + 2. 
源权重未开启流水线并行:权重转换可基于任一策略文件,填写任意一个 ckpt_strategy_rank_x.ckpt 文件的路径即可。 + + 注意:如果策略文件夹下已存在 merged_ckpt_strategy.ckpt 且仍传入文件夹路径,脚本会首先删除旧的 merged_ckpt_strategy.ckpt,再合并生成新的 merged_ckpt_strategy.ckpt 以用于权重转换。因此,请确保该文件夹具有足够的写入权限,否则操作将报错。 + +- mindspore_ckpt_dir:分布式权重路径,请填写源权重所在文件夹的路径,源权重应按 model_dir/rank_x/xxx.safetensors 格式存放,并将文件夹路径填写为 model_dir。 +- output_dir:目标权重的保存路径,默认值为 `/new_llm_data/******/ckpt/nbg3_31b/tmp`,即目标权重将放置在 `/new_llm_data/******/ckpt/nbg3_31b/tmp` 目录下。 +- file_suffix:目标权重文件的命名后缀,默认值为 "1_1",即目标权重将按照 *1_1.safetensors 格式查找。 +- has_redundancy:合并的源权重是否是冗余的权重,默认为 True。 +- filter_out_param_prefix:合并权重时可自定义过滤掉部分参数,过滤规则以前缀名匹配。如优化器参数"adam_"。 +- max_process_num:合并最大进程数。默认值:64。 + +### 推理配置开发 + +在完成权重文件的合并后,需依据训练配置文件开发对应的推理配置文件。 + +以Qwen3为例,基于[Qwen3推理配置](https://gitee.com/mindspore/mindformers/blob/master/configs/qwen3/predict_qwen3.yaml)修改[Qwen3训练配置](https://gitee.com/mindspore/mindformers/blob/master/configs/qwen3/finetune_qwen3.yaml): + +Qwen3训练配置主要修改点包括: + +- run_mode的值修改为"predict"。 +- 添加pretrained_model_dir:Hugging Face或ModelScope的模型目录路径,放置模型配置、Tokenizer等文件。 +- parallel_config只保留data_parallel和model_parallel。 +- model_config中只保留compute_dtype、layernorm_compute_dtype、softmax_compute_dtype、rotary_dtype、params_dtype,和推理配置保持精度一致。 +- parallel模块中,只保留parallel_mode和enable_alltoall,parallel_mode的值修改为"MANUAL_PARALLEL"。 + +### 推理功能验证 + +在权重和配置文件都准备好的情况下,使用单条数据输入进行推理,检查输出内容是否符合预期逻辑,参考[推理文档](https://gitee.com/mindspore/docs/blob/master/docs/mindformers/docs/source_zh_cn/guide/inference.md),拉起推理任务。 + +例如: + +```shell +python run_mindformer.py \ +--config configs/qwen3/predict_qwen3.yaml \ +--run_mode predict \ +--use_parallel False \ +--predict_data '帮助我制定一份去上海的旅游攻略' +``` + +如果输出内容出现乱码或者不符合预期,需要定位精度问题。 + +1. 检查模型配置正确性 + + 确认模型结构与训练配置一致。参考训练配置模板使用教程,确保配置文件符合规范,避免因参数错误导致推理异常。 + +2. 验证权重加载完整性 + + 检查模型权重文件是否完整加载,确保权重名称与模型结构严格匹配。参考新模型权重转换适配教程,查看权重日志即权重切分方式是否正确,避免因权重不匹配导致推理错误。 + +3. 
定位推理精度问题 + + 若模型配置与权重加载均无误,但推理结果仍不符合预期,需进行精度比对分析,参考推理精度比对文档,逐层比对训练与推理的输出差异,排查潜在的数据预处理、计算精度或算子问题。 + +### 使用AISBench进行评测 + +参考AISBench评测章节,使用AISBench工具进行评测,验证模型精度。 \ No newline at end of file diff --git a/docs/mindformers/docs/source_zh_cn/guide/inference.md b/docs/mindformers/docs/source_zh_cn/guide/inference.md index 679bb45112..da743f1364 100644 --- a/docs/mindformers/docs/source_zh_cn/guide/inference.md +++ b/docs/mindformers/docs/source_zh_cn/guide/inference.md @@ -1,4 +1,4 @@ -# 推理 +# 推理指南 [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/mindformers/docs/source_zh_cn/guide/inference.md) diff --git a/docs/mindformers/docs/source_zh_cn/guide/llm_training.md b/docs/mindformers/docs/source_zh_cn/guide/llm_training.md index ea8bfc6696..2a125bc038 100644 --- a/docs/mindformers/docs/source_zh_cn/guide/llm_training.md +++ b/docs/mindformers/docs/source_zh_cn/guide/llm_training.md @@ -1,4 +1,4 @@ -# MindSpore Transformers LLM训练指南 +# 训练指南 [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/mindformers/docs/source_zh_cn/guide/llm_training.md) diff --git a/docs/mindformers/docs/source_zh_cn/guide/pre_training.md b/docs/mindformers/docs/source_zh_cn/guide/pre_training.md index 4d7e4a7f60..59d2196b8d 100644 --- a/docs/mindformers/docs/source_zh_cn/guide/pre_training.md +++ b/docs/mindformers/docs/source_zh_cn/guide/pre_training.md @@ -1,4 +1,4 @@ -# 预训练 +# 预训练实践 [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/mindformers/docs/source_zh_cn/guide/pre_training.md) diff --git a/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md 
b/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md index 77f12361c9..8a9341fdb9 100644 --- a/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md +++ b/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md @@ -1,4 +1,4 @@ -# SFT微调 +# 监督微调实践 [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md) @@ -6,11 +6,11 @@ SFT(Supervised Fine-Tuning,监督微调)采用有监督学习思想,是指在预训练模型的基础上,通过调整部分或全部参数,使模型更适应特定任务或数据集的过程。 -MindSpore Transformers支持全参微调和LoRA高效微调两种SFT微调方式。全参微调是指在训练过程中对所有参数进行更新,适用于大规模数据精调,能获得最优的任务适应能力,但需要的计算资源较大。LoRA高效微调在训练过程中仅更新部分参数,相比全参微调显存占用更少、训练速度更快,但在某些任务中的效果不如全参微调。 +MindSpore Transformers支持全参微调和LoRA高效微调两种监督微调方式。全参微调是指在训练过程中对所有参数进行更新,适用于大规模数据精调,能获得最优的任务适应能力,但需要的计算资源较大。LoRA高效微调在训练过程中仅更新部分参数,相比全参微调显存占用更少、训练速度更快,但在某些任务中的效果不如全参微调。 -## SFT微调的基本流程 +## 监督微调的基本流程 -结合实际操作,可以将SFT微调分解为以下步骤: +结合实际操作,可以将监督微调分解为以下步骤: ### 1. 权重准备 diff --git a/docs/mindformers/docs/source_zh_cn/index.rst b/docs/mindformers/docs/source_zh_cn/index.rst index 6fbff1d104..c8d7785814 100644 --- a/docs/mindformers/docs/source_zh_cn/index.rst +++ b/docs/mindformers/docs/source_zh_cn/index.rst @@ -22,40 +22,12 @@ MindSpore Transformers的开源仓库地址为 `Gitee | MindSpore/mindformers - - - - - - - - - - - - - - - - - - - - - - - +- `训练指南 `_ +- `预训练实践 `_ +- `监督微调实践 `_ +- `推理指南 `_ +- `服务化部署指南 `_ +- `评测指南 `_ 代码仓地址: @@ -128,10 +100,6 @@ MindSpore Transformers功能特性说明 - 推理功能 - - `评测 `_ - - 支持使用第三方开源评测框架和数据集进行大模型榜单评测。 - - `量化 `_ 集成 MindSpore Golden Stick 工具组件,提供统一量化推理流程开箱即用。 @@ -199,7 +167,7 @@ FAQ guide/supervised_fine_tuning guide/inference guide/deployment - guide/benchmarks + guide/evaluation .. toctree:: :glob: -- Gitee
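The per-request latency metrics defined in the performance-evaluation table of the guide above (E2EL, TTFT, TPOT, ITL) can all be derived from a request's send timestamp and the arrival timestamps of its output tokens. The following is a minimal sketch of those definitions, not AISBench's implementation; the function and variable names are illustrative only:

```python
from statistics import mean

def request_metrics(send_ts, token_ts):
    """Compute per-request latency metrics from the send timestamp and the
    arrival timestamps of each output token (all in seconds).

    Illustrative sketch only -- mirrors the metric definitions (E2EL, TTFT,
    TPOT, ITL) in the evaluation guide, not AISBench's actual code.
    """
    assert token_ts, "need at least one output token"
    ttft = token_ts[0] - send_ts          # Time To First Token
    e2el = token_ts[-1] - send_ts         # End-to-End Latency
    # Gaps between adjacent tokens (first token excluded by construction).
    gaps = [b - a for a, b in zip(token_ts, token_ts[1:])]
    itl = mean(gaps) if gaps else 0.0     # Inter-token Latency
    n_out = len(token_ts)
    # Decode-phase average per token, excluding the first token.
    tpot = (e2el - ttft) / (n_out - 1) if n_out > 1 else 0.0
    return {"TTFT": ttft, "TPOT": tpot, "ITL": itl,
            "E2EL": e2el, "OutputTokens": n_out}

# Toy trace: request sent at t=0.0 s, four tokens returned.
m = request_metrics(0.0, [0.25, 0.30, 0.35, 0.40])
```

Note the distinction the table draws: TPOT averages the whole decode span over the token count, while ITL averages the individual inter-token gaps; with evenly spaced tokens, as in the toy trace, the two coincide.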
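The interface configuration described in the guide above targets vLLM's OpenAI-compatible `v1/completions` endpoint. The sketch below shows how the parameters from the request-interface table (`max_out_len`, `temperature`, `top_k`, `top_p`, `ignore_eos`) map onto a request body. `build_completions_request` is a hypothetical helper, not part of AISBench or vLLM; the default values here are placeholders, and `top_k`/`ignore_eos` are vLLM extensions to the OpenAI schema:

```python
import json

def build_completions_request(model, prompt, max_out_len=512,
                              temperature=0.5, top_k=10, top_p=0.95,
                              ignore_eos=False):
    """Build the JSON body for an OpenAI-compatible /v1/completions call.

    Illustrative only: parameter names mirror the request-interface table in
    the evaluation guide; defaults are placeholders, not recommended values.
    """
    body = {
        "model": model,          # name the serving backend loaded
        "prompt": prompt,
        "max_tokens": max_out_len,
        "temperature": temperature,
        "top_k": top_k,          # vLLM extension to the OpenAI schema
        "top_p": top_p,
        "ignore_eos": ignore_eos # vLLM extension: keep generating past EOS
    }
    return json.dumps(body)

# Example: a request body for a hypothetical deployed model.
payload = build_completions_request("Qwen3-30B-A3B", "介绍一下上海", max_out_len=64)
```

Sending `payload` as a POST to `http://{host_ip}:{host_port}/v1/completions` (with `Content-Type: application/json`) is what the benchmark client does per request.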
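The `5_shot`/`0_shot` suffix in dataset task names such as `ceval_gen_5_shot_str` refers to how many worked examples are prepended to each question's prompt. The exact prompt templates live in the AISBench dataset configs; the helper below is only a hedged sketch of the few-shot idea, with names of our own invention:

```python
def build_few_shot_prompt(exemplars, question, n_shot=5):
    """Assemble an n-shot prompt: n worked (question, answer) examples
    followed by the actual question left unanswered.

    Illustrative sketch of the few-shot convention; real dataset tasks
    define their own templates in the AISBench configs.
    """
    parts = [f"问题:{q}\n答案:{a}" for q, a in exemplars[:n_shot]]
    parts.append(f"问题:{question}\n答案:")  # model completes the answer
    return "\n\n".join(parts)

# Example: five exemplars plus the target question.
exemplars = [(f"q{i}", f"a{i}") for i in range(8)]
prompt = build_few_shot_prompt(exemplars, "最终问题", n_shot=5)
```

A `0_shot` task is the degenerate case `n_shot=0`: the prompt contains only the target question.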