diff --git a/docs/msadapter/docs/source_zh_cn/api.rst b/docs/msadapter/docs/source_zh_cn/api.rst
index 9137473e6454f84e80808f130286f631b3c6c414..21ac65550d65147a804db81c9f956da29974d160 100644
--- a/docs/msadapter/docs/source_zh_cn/api.rst
+++ b/docs/msadapter/docs/source_zh_cn/api.rst
@@ -129,56 +129,6 @@ Dataloader中的pin_memory参数仅支持设置为False
         return data.pin_memory(device)
     TypeError: pin_memory() takes 1 positional argument but 2 were given
 
-不支持tensor.backward()操作
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-示例代码:
-
-.. code-block::
-
-    import torch
-    x = torch.randn(2,)
-    x.backward()
-
-报错信息如下:
-
-.. code-block::
-
-    Traceback (most recent call last):
-      File "/path/to/your/demo.py", line XX, in
-        x.backward()
-      File "/path/to/your/torch/_tensor.py", line 325, in backward
-        raise ValueError('not support Tensor.backward yet.')
-    ValueError: not support Tensor.backward yet.
-
-不支持to(device)操作
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-示例代码:
-
-.. code-block::
-
-    import torch
-    x = torch.randn(2,)
-    device = "cuda"
-    x.to(device)
-
-报错信息如下:
-
-.. code-block::
-
-    Traceback (most recent call last):
-      File "/path/to/your/demo.py", line XX, in
-        x.to(device)
-      File "/path/to/your/mindspore/common/tensor.py", line 3018, in to
-        return self if self.dtype == dtype else self._to(dtype)
-    TypeError: _to(): argument 'dtype' (position 1) must be mstype, not str.
-
-    ----------------------------------------------------
-    - C++ Call Stack: (For framework developers)
-    ----------------------------------------------------
-    mindspore/ccsrc/pynative/op_function/converter.cc:657 Parse
-
 MindSpore导出的ckpt文件无法被直接加载到PyTorch模型中
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/docs/msadapter/docs/source_zh_cn/msadapter_user_guide/constraints.md b/docs/msadapter/docs/source_zh_cn/msadapter_user_guide/constraints.md
index c1fd89fa8b419fdba25d9c4fb5c42f8d6bcda204..a4b718d54f2138d4485d131b5aee77cf0d0461eb 100755
--- a/docs/msadapter/docs/source_zh_cn/msadapter_user_guide/constraints.md
+++ b/docs/msadapter/docs/source_zh_cn/msadapter_user_guide/constraints.md
@@ -4,63 +4,12 @@
 
 本文介绍MindSpore和PyTorch实现上的主要区别:
 
-1. 微分机制
-2. Dispatch机制
-3. Storage机制
-4. 静态编译机制
+1. Dispatch机制
+2. Storage机制
+3. 静态编译机制
 
 这些机制的不同,导致用户需要手动修改代码,或者部分功能暂时不支持。
 
-## 微分机制
-
-PyTorch 使用的是动态计算图,运算在代码执行时立即计算,正反向计算图在每次前向传播时动态构建。PyTorch微分为命令式反向微分,更符合面对对象编程的使用习惯。
-
-MindSpore使用函数式自动微分的设计理念,提供更接近于数学语义的自动微分接口。MindSpore的微分可以简单理解为函数包装了一个模型,接口是一个函数,而PyTorch的接口是模型。
-
-详情参考[MindSpore函数式微分](https://www.mindspore.cn/tutorials/zh-CN/master/beginner/autograd.html)。
-
-由于两者底层设计的不同,MSAdapter不支持以下相关接口:
-
-- torch.Tensor.backward()
-- torch.autograd.*
-
-部分相关接口PyTorch原始用法如下:
-
-```python
-loss.backward()
-```
-
-```python
-loss.backward(gradient_tensor)
-```
-
-由于计算图的实现差异(包含命令式微分与函数式微分的差异),**原始正向计算以及反向计算需要用户手动进行如下修改**:
-
-原始PyTorch写法:
-
-```python
-outputs = model(inputs)
-loss = criterion(outputs, labels)
-loss.backward()
-```
-
-MSAdapter修改后的写法:
-
-```python
-import mindspore
-def forward_fn(inputs, labels):
-    outputs = model(inputs)
-    loss = criterion(outputs, labels)
-    return loss
-grad_fn = mindspore.value_and_grad(forward_fn, None, weights=model.trainable_params())
-
-loss, grads = grad_fn(inputs, labels)
-```
-
-mindspore.value_and_grad的第二个参数为grad_position,定义对于哪些输入进行求导。当用户设置为None的时候必须定义weights。详细定义可以参考[mindspore.value_and_grad](https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore/mindspore.value_and_grad.html)。
-
-MSAdapter只需要调用grad_fn就能够一次性调用正向计算加反向计算。
-
 ## Dispatch机制
 
 `torch.dispatch` 是 PyTorch 中用于拦截和自定义张量操作的机制。它允许开发者在张量运算被调用时插入自己的逻辑,无论是为了调试、性能优化,还是实现特定领域的功能增强。这个特性是在 PyTorch 1.7中引入的,并且为高级用户提供了强大的能力来扩展PyTorch的功能。
diff --git a/docs/msadapter/docs/source_zh_cn/msadapter_user_guide/install.md b/docs/msadapter/docs/source_zh_cn/msadapter_user_guide/install.md
index c478bbeaa5e3fc7195ed3b72f9e45e6233b90fb9..fd4522993c69b62f164698593b01249938de22a1 100755
--- a/docs/msadapter/docs/source_zh_cn/msadapter_user_guide/install.md
+++ b/docs/msadapter/docs/source_zh_cn/msadapter_user_guide/install.md
@@ -18,8 +18,8 @@ pip install mindspore
 - 如果用户希望直接使用源码,设置如下环境变量:
 
   ``` bash
-  git clone https://openi.pcl.ac.cn/OpenI/MSAdapter
-  export $PYTHONPATH=your_workspace/MSAdapter/mindtorch
+  export PYTHONPATH=${your_workspace}/msadapter/:$PYTHONPATH
+  export PYTHONPATH=${your_workspace}/msadapter/msa_thirdparty:$PYTHONPATH
   ```
 
   其中,your_workspace是git clone下载的目录。此方法不会影响用户的PyTorch使用。
@@ -27,9 +27,12 @@ pip install mindspore
 - 如果用户希望以Python安装包编译的形式使用,进入MSAdapter目录,进行源码编译操作:
 
   ```bash
-  git clone https://openi.pcl.ac.cn/OpenI/MSAdapter
-  cd MSAdapter
-  pip install .
+  git clone https://gitee.com/mindspore/msadapter.git
+  cd msadapter
+  bash scripts/build.sh
+  pip install ${your_workspace}/msadapter/dist/*.whl
+  export PYTHONPATH=/*/site-packages/msa_thirdparty:$PYTHONPATH
+  # /*/site-packages 指python环境下的安装包路径,可以使用pip show msadapter获取。
   ```
 
 直接安装会覆盖原始PyTorch的使用,如果希望同时使用PyTorch和MSAdapter,可以考虑直接使用源码。
diff --git a/docs/msadapter/docs/source_zh_cn/msadapter_user_guide/quick_start.md b/docs/msadapter/docs/source_zh_cn/msadapter_user_guide/quick_start.md
index 7d5449c5632261fdbce4aff5489e6fdeae358ef2..fa53fcc1f527c279d2dc4d950a5241e4526b271e 100755
--- a/docs/msadapter/docs/source_zh_cn/msadapter_user_guide/quick_start.md
+++ b/docs/msadapter/docs/source_zh_cn/msadapter_user_guide/quick_start.md
@@ -6,13 +6,8 @@
 
 模型适配详细步骤如下:
 
-1. 导入依赖包
-2. 模型定义
-3. 参数解析
-4. 数据下载与预处理
-5. 模型构建
-6. 定义损失函数
-7. 训练
+1. 使能MSAdapter
+2. 训练
 
 ## PyTorch用例
 
@@ -86,219 +81,35 @@ if __name__ == "__main__":
 
 接下来,对应PyTorch的完整流程,说明如何使用MSAdapter完成相同的任务。
 
-### 1. 导入依赖包
+### 1. 使能MSAdapter
 
-MSAdapter已经兼容PyTorch的各类子模块,无需修改。
+MSAdapter已经兼容PyTorch的各类子模块。使能MSAdapter有两种方式:第一种是微调脚本切换后端,第二种是环境变量切换。
 
-```python
-import torch
-import torch.nn as nn
-import torch.optim as optim
-from torch.utils.data import DataLoader
-from torchvision import datasets, transforms
-```
-
-### 2. 模型定义
-
-MSAdapter已经兼容torch.nn.Module,无需修改。
-
-```python
-class ToyModel(nn.Module):
-    def __init__(self):
-        super(ToyModel, self).__init__()
-        self.net1 = nn.Linear(784, 64)
-        self.relu = nn.ReLU()
-        self.net2 = nn.Linear(64, 10)
-    def forward(self, x):
-        return self.net2(self.relu(self.net1(x)))
-```
-
-### 3. 参数解析
-
-argparse是常规Python,与深度学习无关,无需修改。
-
-### 4. 数据下载与预处理
-
-MSAdapter已经兼容基础数据集相关接口,无需修改。
-
-```python
-def data_process(inputs, labels):
-    inputs = inputs.view(inputs.size(0), -1)
-    return inputs, labels
-
-# 预处理函数
-transform = transforms.Compose([
-    transforms.ToTensor(),
-    transforms.Normalize((0.5,), (0.5,))
-])
-train_dataset = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
-# 加载数据集
-train_loader = DataLoader(train_dataset, batch_size=args.batch_size, shuffle=True)
-```
-
-### 5. 模型构建
-
-MSAdapter在torch.nn.Module.to()调用上与PyTorch有差别。
-
-```python
-model = ToyModel().to('cuda')
-```
-
-由于MSAdapter暂时不支持torch.nn.Module.to接口,需要转换为如下方式,MSAdapter默认将模型放置于NPU上。若用户希望将模型或者张量搬运至CPU,则需要调用.cpu()接口。
-
-修改如下:
-
-```python
-model = ToyModel()
-```
-
-### 6. 定义损失函数
-
-MSAdapter的损失函数使用方式与PyTorch一致,无需修改。
-
-```python
-criterion = nn.CrossEntropyLoss()
-optimizer = optim.Adam(model.parameters(), lr=args.learning_rate)
-```
-
-### 7. 训练
-
-MSAdapter在torch.Tensor.to()、正向计算、反向微分计算的调用上与PyTorch有差别,需要修改代码。
-
-```python
-for epoch in range(args.epochs):
-    model.train()
-    for inputs, labels in train_loader:
-        inputs, labels = data_process(inputs, labels)
-        inputs, labels = inputs.to('cuda'), labels.to('cuda') # Tensor.to()问题
-        optimizer.zero_grad()
-        outputs = model(inputs) # 前向调用不同
-        loss = criterion(outputs, labels).to('cuda') # Tensor.to()问题
-        loss.backward() # 反向调用不同
-        optimizer.step()
-        step += 1
-```
-
-#### torch.Tensor.to()
-
-与步骤5类似,由于MSAdapter暂时不支持torch.nn.Tensor.to接口,需要转换为如下方式。注意:MSAdapter默认将模型放置于NPU上。
-
-```python
-inputs, labels = inputs.to('cuda'), labels.to('cuda')
-loss = criterion(outputs, labels).to('cuda')
-```
-
-修改如下:
-
-```python
-# inputs, labels无需显示指定NPU
-loss = criterion(outputs, labels)
-```
-
-#### 前向与反向计算
+#### 1.1 微调脚本切换后端
 
-由于MSAdapter使用了函数式微分,正向反向计算均要调用函数,所以需要将PyTorch模型封装为函数。用户除了修改代码,还需要导入MindSpore。
+这种方式的前提是已经安装了MSAdapter,具体安装流程可参考安装章节。只需在脚本开头加一行导入来切换后端,具体修改如下:
 
 ```python
-outputs = model(inputs) # 前向调用不同
-loss = criterion(outputs, labels).to('cuda') # Tensor.to()问题不在此重述
-loss.backward() # 反向调用不同
-```
-
-修改如下:
-
-1. 预定义正向计算函数和反向计算函数
-
-    ```python
-    import mindspore
-    def forward_fn(inputs, labels):
-        outputs = model(inputs)
-        loss = criterion(outputs, labels)
-        return loss
-    grad_fn = mindspore.value_and_grad(forward_fn, None, weights=model.trainable_params())
-    ```
-
-2. 替换原始的PyTorch计算过程
-
-    ```python
-    loss, grads = grad_fn(inputs, labels)
-    ```
-
-## MSAdapter适配后代码
-
-此处提供MSAdapter可运行的代码:
-
-```python
-import argparse
+import msadapter # 改为mindspore后端执行
 import torch
-import torch_npu
 import torch.nn as nn
 import torch.optim as optim
 from torch.utils.data import DataLoader
 from torchvision import datasets, transforms
-import mindspore
-
-class ToyModel(nn.Module):
-    def __init__(self):
-        super(ToyModel, self).__init__()
-        self.net1 = nn.Linear(784, 64)
-        self.relu = nn.ReLU()
-        self.net2 = nn.Linear(64, 10)
-
-    def forward(self, x):
-        return self.net2(self.relu(self.net1(x)))
-
-def parse_args():
-    parser = argparse.ArgumentParser(description="command line arguments")
-    parser.add_argument('--batch_size', type=int, default=64)
-    parser.add_argument('--epochs', type=int, default=1)
-    parser.add_argument('--learning_rate', type=float, default=0.0001)
-    return parser.parse_args()
-
-def data_process(inputs, labels):
-    inputs = inputs.view(inputs.size(0), -1)
-    return inputs, labels
-
-def main():
-    # 获取传参
-    args = parse_args()
-    transform = transforms.Compose([
-        transforms.ToTensor(),
-        transforms.Normalize((0.5,), (0.5,))
-    ])
-    train_dataset = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
-    # 加载数据集
-    train_loader = DataLoader(train_dataset, batch_size=args.batch_size, shuffle=True)
-    # 将模型转移到NPU上
-    model = ToyModel()
-    # 定义损失函数
-    criterion = nn.CrossEntropyLoss()
-    optimizer = optim.Adam(model.parameters(), lr=args.learning_rate)
-    step = 0
+```
 
-    def forward_fn(inputs, labels):
-        outputs = model(inputs)
-        loss = criterion(outputs, labels)
-        return loss
-    grad_fn = mindspore.value_and_grad(forward_fn, None, weights=model.trainable_params())
+#### 1.2 环境变量切换
 
-    for epoch in range(args.epochs):
-        model.train()
-        for inputs, labels in train_loader:
-            # 数据预处理,将数据集的数据转成需要的shape
-            inputs, labels = data_process(inputs, labels)
+这种方式要求下载源码,并将源码路径加入环境变量,即可使能MSAdapter。
 
-            optimizer.zero_grad()
-            loss, grads = grad_fn(inputs, labels)
-            optimizer.step()
+```bash
+export PYTHONPATH=${work_dir}/msadapter/:$PYTHONPATH
+export PYTHONPATH=${work_dir}/msadapter/msa_thirdparty:$PYTHONPATH
+```
 
-            # 添加每个step的打印,用户可自行修改
-            print(f"step = {step}, loss : {loss}")
-            step += 1
+### 2. 训练
 
-if __name__ == "__main__":
-    main()
-```
+当前MSAdapter的微分机制已与PyTorch对齐,无需用户调整,按照使能MSAdapter章节的修改即可拉起训练。
 
 ## loss对比
diff --git a/docs/msadapter/docs/source_zh_cn/note/pytorch_api_supporting_nn_functional.md b/docs/msadapter/docs/source_zh_cn/note/pytorch_api_supporting_nn_functional.md
index 3f2d6ae0e71ba428c77b68a29e1a487f78cf8793..9d32cee54981468bc1b2ea1b18bd9a1531999143 100644
--- a/docs/msadapter/docs/source_zh_cn/note/pytorch_api_supporting_nn_functional.md
+++ b/docs/msadapter/docs/source_zh_cn/note/pytorch_api_supporting_nn_functional.md
@@ -8,7 +8,7 @@
 |-------|-------|---------|
 |[conv1d](https://pytorch.org/docs/2.1/generated/torch.nn.functional.conv1d.html)|Beta|支持数据类型:fp16、fp32|
 |[conv2d](https://pytorch.org/docs/2.1/generated/torch.nn.functional.conv2d.html)|Beta|支持数据类型:bf16、fp16、fp32|
-|[conv3d](https://pytorch.org/docs/2.1/generated/torch.nn.functional.conv3d.html)|Not Support|N/A|
+|[conv3d](https://pytorch.org/docs/2.1/generated/torch.nn.functional.conv3d.html)|Stable|N/A|
 |[conv_transpose1d](https://pytorch.org/docs/2.1/generated/torch.nn.functional.conv_transpose1d.html)|Not Support|N/A|
 |[conv_transpose2d](https://pytorch.org/docs/2.1/generated/torch.nn.functional.conv_transpose2d.html)|Demo|不支持CPU平台|
 |[conv_transpose3d](https://pytorch.org/docs/2.1/generated/torch.nn.functional.conv_transpose3d.html)|Not Support|N/A|
@@ -24,7 +24,7 @@
 |[avg_pool3d](https://pytorch.org/docs/2.1/generated/torch.nn.functional.avg_pool3d.html)|Not Support|N/A|
 |[max_pool1d](https://pytorch.org/docs/2.1/generated/torch.nn.functional.max_pool1d.html)|Beta|N/A|
 |[max_pool2d](https://pytorch.org/docs/2.1/generated/torch.nn.functional.max_pool2d.html)|Stable|N/A|
-|[max_pool3d](https://pytorch.org/docs/2.1/generated/torch.nn.functional.max_pool3d.html)|Not Support|N/A|
+|[max_pool3d](https://pytorch.org/docs/2.1/generated/torch.nn.functional.max_pool3d.html)|Stable|N/A|
 |[max_unpool1d](https://pytorch.org/docs/2.1/generated/torch.nn.functional.max_unpool1d.html)|Not Support|N/A|
 |[max_unpool2d](https://pytorch.org/docs/2.1/generated/torch.nn.functional.max_unpool2d.html)|Not Support|N/A|
 |[max_unpool3d](https://pytorch.org/docs/2.1/generated/torch.nn.functional.max_unpool3d.html)|Not Support|N/A|
@@ -44,7 +44,7 @@
 
 |API名称|API状态|限制与说明|
 |-------|-------|---------|
-|[scaled_dot_product_attention](https://pytorch.org/docs/2.1/generated/torch.nn.functional.scaled_dot_product_attention.html)|Not Support|N/A|
+|[scaled_dot_product_attention](https://pytorch.org/docs/2.1/generated/torch.nn.functional.scaled_dot_product_attention.html)|Beta|N/A|
 
 ## Non-linear activation functions
 
@@ -89,7 +89,7 @@
 |[instance_norm](https://pytorch.org/docs/2.1/generated/torch.nn.functional.instance_norm.html)|Not Support|N/A|
 |[layer_norm](https://pytorch.org/docs/2.1/generated/torch.nn.functional.layer_norm.html)|Stable|支持数据类型:bf16、fp16、fp32|
 |[local_response_norm](https://pytorch.org/docs/2.1/generated/torch.nn.functional.local_response_norm.html)|Not Support|N/A|
-|[rms_norm](https://pytorch.org/docs/2.1/generated/torch.nn.functional.rms_norm.html)|Not Support|N/A|
+|[rms_norm](https://pytorch.org/docs/2.1/generated/torch.nn.functional.rms_norm.html)|Beta|N/A|
 |[normalize](https://pytorch.org/docs/2.1/generated/torch.nn.functional.normalize.html)|Beta|不支持out出参;支持数据类型:bf16、fp16、fp32|
 
 ## Linear functions
diff --git a/docs/msadapter/docs/source_zh_cn/note/pytorch_api_supporting_tensor.md b/docs/msadapter/docs/source_zh_cn/note/pytorch_api_supporting_tensor.md
index 44c473b9d6039e7d9f6def6b64396b5e5d2e4ccd..bbcc6e55bfeee5baf366714dacfcdb3ce2324e38 100644
--- a/docs/msadapter/docs/source_zh_cn/note/pytorch_api_supporting_tensor.md
+++ b/docs/msadapter/docs/source_zh_cn/note/pytorch_api_supporting_tensor.md
@@ -9,20 +9,20 @@
 |[Tensor.new_tensor](https://pytorch.org/docs/2.1/generated/torch.Tensor.new_tensor.html)|Not Support|N/A|
 |[Tensor.new_full](https://pytorch.org/docs/2.1/generated/torch.Tensor.new_full.html)|Not Support|N/A|
 |[Tensor.new_empty](https://pytorch.org/docs/2.1/generated/torch.Tensor.new_empty.html)|Not Support|N/A|
-|[Tensor.new_ones](https://pytorch.org/docs/2.1/generated/torch.Tensor.new_ones.html)|Not Support|N/A|
-|[Tensor.new_zeros](https://pytorch.org/docs/2.1/generated/torch.Tensor.new_zeros.html)|Not Support|N/A|
+|[Tensor.new_ones](https://pytorch.org/docs/2.1/generated/torch.Tensor.new_ones.html)|Beta|N/A|
+|[Tensor.new_zeros](https://pytorch.org/docs/2.1/generated/torch.Tensor.new_zeros.html)|Beta|N/A|
 |[Tensor.is_cuda](https://pytorch.org/docs/2.1/generated/torch.Tensor.is_cuda.html)|Not Support|N/A|
 |[Tensor.is_quantized](https://pytorch.org/docs/2.1/generated/torch.Tensor.is_quantized.html)|Not Support|N/A|
 |[Tensor.is_meta](https://pytorch.org/docs/2.1/generated/torch.Tensor.is_meta.html)|Not Support|N/A|
 |[Tensor.device](https://pytorch.org/docs/2.1/generated/torch.Tensor.device.html)|Not Support|N/A|
-|[Tensor.grad](https://pytorch.org/docs/2.1/generated/torch.Tensor.grad.html)|Not Support|N/A|
+|[Tensor.grad](https://pytorch.org/docs/2.1/generated/torch.Tensor.grad.html)|Beta|N/A|
 |[Tensor.ndim](https://pytorch.org/docs/2.1/generated/torch.Tensor.ndim.html)|Not Support|N/A|
-|[Tensor.real](https://pytorch.org/docs/2.1/generated/torch.Tensor.real.html)|Not Support|N/A|
-|[Tensor.imag](https://pytorch.org/docs/2.1/generated/torch.Tensor.imag.html)|Not Support|N/A|
+|[Tensor.real](https://pytorch.org/docs/2.1/generated/torch.Tensor.real.html)|Beta|N/A|
+|[Tensor.imag](https://pytorch.org/docs/2.1/generated/torch.Tensor.imag.html)|Beta|N/A|
 |[Tensor.nbytes](https://pytorch.org/docs/2.1/generated/torch.Tensor.nbytes.html)|Not Support|N/A|
 |[Tensor.itemsize](https://pytorch.org/docs/2.1/generated/torch.Tensor.itemsize.html)|Not Support|N/A|
 |[Tensor.abs](https://pytorch.org/docs/2.1/generated/torch.Tensor.abs.html)|Stable|支持数据类型:bf16、fp16、fp32、fp64、uint8、int8、int16、int32、int64、bool|
-|[Tensor.abs_](https://pytorch.org/docs/2.1/generated/torch.Tensor.abs_.html)|Not Support|N/A|
+|[Tensor.abs_](https://pytorch.org/docs/2.1/generated/torch.Tensor.abs_.html)|Beta|N/A|
 |[Tensor.absolute](https://pytorch.org/docs/2.1/generated/torch.Tensor.absolute.html)|Stable|支持数据类型:bf16、fp16、fp32、uint8、int8、int16、int32、int64|
 |[Tensor.absolute_](https://pytorch.org/docs/2.1/generated/torch.Tensor.absolute_.html)|Not Support|N/A|
 |[Tensor.acos](https://pytorch.org/docs/2.1/generated/torch.Tensor.acos.html)|Not Support|N/A|
@@ -34,9 +34,9 @@
 |[Tensor.addbmm](https://pytorch.org/docs/2.1/generated/torch.Tensor.addbmm.html)|Not Support|N/A|
 |[Tensor.addbmm_](https://pytorch.org/docs/2.1/generated/torch.Tensor.addbmm_.html)|Not Support|N/A|
 |[Tensor.addcdiv](https://pytorch.org/docs/2.1/generated/torch.Tensor.addcdiv.html)|Not Support|N/A|
-|[Tensor.addcdiv_](https://pytorch.org/docs/2.1/generated/torch.Tensor.addcdiv_.html)|Not Support|N/A|
+|[Tensor.addcdiv_](https://pytorch.org/docs/2.1/generated/torch.Tensor.addcdiv_.html)|Beta|N/A|
 |[Tensor.addcmul](https://pytorch.org/docs/2.1/generated/torch.Tensor.addcmul.html)|Not Support|N/A|
-|[Tensor.addcmul_](https://pytorch.org/docs/2.1/generated/torch.Tensor.addcmul_.html)|Not Support|N/A|
+|[Tensor.addcmul_](https://pytorch.org/docs/2.1/generated/torch.Tensor.addcmul_.html)|Beta|N/A|
 |[Tensor.addmm](https://pytorch.org/docs/2.1/generated/torch.Tensor.addmm.html)|Not Support|N/A|
 |[Tensor.addmm_](https://pytorch.org/docs/2.1/generated/torch.Tensor.addmm_.html)|Not Support|N/A|
 |[Tensor.sspaddmm](https://pytorch.org/docs/2.1/generated/torch.Tensor.sspaddmm.html)|Not Support|N/A|
@@ -70,14 +70,14 @@
 |[Tensor.arctan2_](https://pytorch.org/docs/2.1/generated/torch.Tensor.arctan2_.html)|Not Support|N/A|
 |[Tensor.all](https://pytorch.org/docs/2.1/generated/torch.Tensor.all.html)|Not Support|N/A|
 |[Tensor.any](https://pytorch.org/docs/2.1/generated/torch.Tensor.any.html)|Not Support|N/A|
-|[Tensor.backward](https://pytorch.org/docs/2.1/generated/torch.Tensor.backward.html)|Not Support|N/A|
+|[Tensor.backward](https://pytorch.org/docs/2.1/generated/torch.Tensor.backward.html)|Demo|N/A|
 |[Tensor.baddbmm](https://pytorch.org/docs/2.1/generated/torch.Tensor.baddbmm.html)|Not Support|N/A|
 |[Tensor.baddbmm_](https://pytorch.org/docs/2.1/generated/torch.Tensor.baddbmm_.html)|Not Support|N/A|
 |[Tensor.bernoulli](https://pytorch.org/docs/2.1/generated/torch.Tensor.bernoulli.html)|Not Support|N/A|
 |[Tensor.bernoulli_](https://pytorch.org/docs/2.1/generated/torch.Tensor.bernoulli_.html)|Not Support|N/A|
 |[Tensor.bfloat16](https://pytorch.org/docs/2.1/generated/torch.Tensor.bfloat16.html)|Not Support|N/A|
 |[Tensor.bincount](https://pytorch.org/docs/2.1/generated/torch.Tensor.bincount.html)|Not Support|N/A|
-|[Tensor.bitwise_not](https://pytorch.org/docs/2.1/generated/torch.Tensor.bitwise_not.html)|Not Support|N/A|
+|[Tensor.bitwise_not](https://pytorch.org/docs/2.1/generated/torch.Tensor.bitwise_not.html)|Demo|N/A|
 |[Tensor.bitwise_not_](https://pytorch.org/docs/2.1/generated/torch.Tensor.bitwise_not_.html)|Not Support|N/A|
 |[Tensor.bitwise_and](https://pytorch.org/docs/2.1/generated/torch.Tensor.bitwise_and.html)|Not Support|N/A|
 |[Tensor.bitwise_and_](https://pytorch.org/docs/2.1/generated/torch.Tensor.bitwise_and_.html)|Not Support|N/A|
@@ -157,7 +157,7 @@
 |[Tensor.diff](https://pytorch.org/docs/2.1/generated/torch.Tensor.diff.html)|Not Support|N/A|
 |[Tensor.digamma](https://pytorch.org/docs/2.1/generated/torch.Tensor.digamma.html)|Not Support|N/A|
 |[Tensor.digamma_](https://pytorch.org/docs/2.1/generated/torch.Tensor.digamma_.html)|Not Support|N/A|
-|[Tensor.dim](https://pytorch.org/docs/2.1/generated/torch.Tensor.dim.html)|Not Support|N/A|
+|[Tensor.dim](https://pytorch.org/docs/2.1/generated/torch.Tensor.dim.html)|Stable|N/A|
 |[Tensor.dim_order](https://pytorch.org/docs/2.1/generated/torch.Tensor.dim_order.html)|Not Support|N/A|
 |[Tensor.dist](https://pytorch.org/docs/2.1/generated/torch.Tensor.dist.html)|Not Support|N/A|
 |[Tensor.div](https://pytorch.org/docs/2.1/generated/torch.Tensor.div.html)|Not Support|N/A|
@@ -167,7 +167,7 @@
 |[Tensor.dot](https://pytorch.org/docs/2.1/generated/torch.Tensor.dot.html)|Not Support|N/A|
 |[Tensor.double](https://pytorch.org/docs/2.1/generated/torch.Tensor.double.html)|Not Support|N/A|
 |[Tensor.dsplit](https://pytorch.org/docs/2.1/generated/torch.Tensor.dsplit.html)|Not Support|N/A|
-|[Tensor.element_size](https://pytorch.org/docs/2.1/generated/torch.Tensor.element_size.html)|Not Support|N/A|
+|[Tensor.element_size](https://pytorch.org/docs/2.1/generated/torch.Tensor.element_size.html)|Beta|N/A|
 |[Tensor.eq](https://pytorch.org/docs/2.1/generated/torch.Tensor.eq.html)|Not Support|N/A|
 |[Tensor.eq_](https://pytorch.org/docs/2.1/generated/torch.Tensor.eq_.html)|Not Support|N/A|
 |[Tensor.equal](https://pytorch.org/docs/2.1/generated/torch.Tensor.equal.html)|Not Support|N/A|
@@ -181,7 +181,7 @@
 |[Tensor.exp_](https://pytorch.org/docs/2.1/generated/torch.Tensor.exp_.html)|Not Support|N/A|
 |[Tensor.expm1](https://pytorch.org/docs/2.1/generated/torch.Tensor.expm1.html)|Not Support|N/A|
 |[Tensor.expm1_](https://pytorch.org/docs/2.1/generated/torch.Tensor.expm1_.html)|Not Support|N/A|
-|[Tensor.expand](https://pytorch.org/docs/2.1/generated/torch.Tensor.expand.html)|Not Support|N/A|
+|[Tensor.expand](https://pytorch.org/docs/2.1/generated/torch.Tensor.expand.html)|Stable|N/A|
 |[Tensor.expand_as](https://pytorch.org/docs/2.1/generated/torch.Tensor.expand_as.html)|Not Support|N/A|
 |[Tensor.exponential_](https://pytorch.org/docs/2.1/generated/torch.Tensor.exponential_.html)|Not Support|N/A|
 |[Tensor.fix](https://pytorch.org/docs/2.1/generated/torch.Tensor.fix.html)|Not Support|N/A|
@@ -191,7 +191,7 @@
 |[Tensor.flip](https://pytorch.org/docs/2.1/generated/torch.Tensor.flip.html)|Not Support|N/A|
 |[Tensor.fliplr](https://pytorch.org/docs/2.1/generated/torch.Tensor.fliplr.html)|Not Support|N/A|
 |[Tensor.flipud](https://pytorch.org/docs/2.1/generated/torch.Tensor.flipud.html)|Not Support|N/A|
-|[Tensor.float](https://pytorch.org/docs/2.1/generated/torch.Tensor.float.html)|Not Support|N/A|
+|[Tensor.float](https://pytorch.org/docs/2.1/generated/torch.Tensor.float.html)|Stable|N/A|
 |[Tensor.float_power](https://pytorch.org/docs/2.1/generated/torch.Tensor.float_power.html)|Not Support|N/A|
 |[Tensor.float_power_](https://pytorch.org/docs/2.1/generated/torch.Tensor.float_power_.html)|Not Support|N/A|
 |[Tensor.floor](https://pytorch.org/docs/2.1/generated/torch.Tensor.floor.html)|Stable|支持数据类型:fp16、fp32|
@@ -257,9 +257,9 @@
 |[Tensor.is_contiguous](https://pytorch.org/docs/2.1/generated/torch.Tensor.is_contiguous.html)|Not Support|N/A|
 |[Tensor.is_complex](https://pytorch.org/docs/2.1/generated/torch.Tensor.is_complex.html)|Not Support|N/A|
 |[Tensor.is_conj](https://pytorch.org/docs/2.1/generated/torch.Tensor.is_conj.html)|Not Support|N/A|
-|[Tensor.is_floating_point](https://pytorch.org/docs/2.1/generated/torch.Tensor.is_floating_point.html)|Not Support|N/A|
+|[Tensor.is_floating_point](https://pytorch.org/docs/2.1/generated/torch.Tensor.is_floating_point.html)|Beta|N/A|
 |[Tensor.is_inference](https://pytorch.org/docs/2.1/generated/torch.Tensor.is_inference.html)|Not Support|N/A|
-|[Tensor.is_leaf](https://pytorch.org/docs/2.1/generated/torch.Tensor.is_leaf.html)|Not Support|N/A|
+|[Tensor.is_leaf](https://pytorch.org/docs/2.1/generated/torch.Tensor.is_leaf.html)|Beta|N/A|
 |[Tensor.is_pinned](https://pytorch.org/docs/2.1/generated/torch.Tensor.is_pinned.html)|Not Support|N/A|
 |[Tensor.is_set_to](https://pytorch.org/docs/2.1/generated/torch.Tensor.is_set_to.html)|Not Support|N/A|
 |[Tensor.is_shared](https://pytorch.org/docs/2.1/generated/torch.Tensor.is_shared.html)|Not Support|N/A|
@@ -278,7 +278,7 @@
 |[Tensor.less_equal](https://pytorch.org/docs/2.1/generated/torch.Tensor.less_equal.html)|Not Support|N/A|
 |[Tensor.less_equal_](https://pytorch.org/docs/2.1/generated/torch.Tensor.less_equal_.html)|Not Support|N/A|
 |[Tensor.lerp](https://pytorch.org/docs/2.1/generated/torch.Tensor.lerp.html)|Not Support|N/A|
-|[Tensor.lerp_](https://pytorch.org/docs/2.1/generated/torch.Tensor.lerp_.html)|Not Support|N/A|
+|[Tensor.lerp_](https://pytorch.org/docs/2.1/generated/torch.Tensor.lerp_.html)|Beta|N/A|
 |[Tensor.lgamma](https://pytorch.org/docs/2.1/generated/torch.Tensor.lgamma.html)|Not Support|N/A|
 |[Tensor.lgamma_](https://pytorch.org/docs/2.1/generated/torch.Tensor.lgamma_.html)|Not Support|N/A|
 |[Tensor.log](https://pytorch.org/docs/2.1/generated/torch.Tensor.log.html)|Stable|支持数据类型:bf16、fp16、fp32、fp64、uint8、int8、int16、int32、int64、bool|
@@ -313,15 +313,15 @@
 |[Tensor.lu_solve](https://pytorch.org/docs/2.1/generated/torch.Tensor.lu_solve.html)|Not Support|N/A|
 |[Tensor.as_subclass](https://pytorch.org/docs/2.1/generated/torch.Tensor.as_subclass.html)|Not Support|N/A|
 |[Tensor.map_](https://pytorch.org/docs/2.1/generated/torch.Tensor.map_.html)|Not Support|N/A|
-|[Tensor.masked_scatter_](https://pytorch.org/docs/2.1/generated/torch.Tensor.masked_scatter_.html)|Not Support|N/A|
+|[Tensor.masked_scatter_](https://pytorch.org/docs/2.1/generated/torch.Tensor.masked_scatter_.html)|Beta|N/A|
 |[Tensor.masked_scatter](https://pytorch.org/docs/2.1/generated/torch.Tensor.masked_scatter.html)|Not Support|N/A|
 |[Tensor.masked_fill_](https://pytorch.org/docs/2.1/generated/torch.Tensor.masked_fill_.html)|Not Support|N/A|
-|[Tensor.masked_fill](https://pytorch.org/docs/2.1/generated/torch.Tensor.masked_fill.html)|Not Support|N/A|
+|[Tensor.masked_fill](https://pytorch.org/docs/2.1/generated/torch.Tensor.masked_fill.html)|Stable|N/A|
 |[Tensor.masked_select](https://pytorch.org/docs/2.1/generated/torch.Tensor.masked_select.html)|Not Support|N/A|
 |[Tensor.matmul](https://pytorch.org/docs/2.1/generated/torch.Tensor.matmul.html)|Stable|支持数据类型:fp16、fp32|
 |[Tensor.matrix_power](https://pytorch.org/docs/2.1/generated/torch.Tensor.matrix_power.html)|Not Support|N/A|
 |[Tensor.matrix_exp](https://pytorch.org/docs/2.1/generated/torch.Tensor.matrix_exp.html)|Not Support|N/A|
-|[Tensor.max](https://pytorch.org/docs/2.1/generated/torch.Tensor.max.html)|Not Support|N/A|
+|[Tensor.max](https://pytorch.org/docs/2.1/generated/torch.Tensor.max.html)|Stable|N/A|
 |[Tensor.maximum](https://pytorch.org/docs/2.1/generated/torch.Tensor.maximum.html)|Not Support|N/A|
 |[Tensor.mean](https://pytorch.org/docs/2.1/generated/torch.Tensor.mean.html)|Not Support|N/A|
 |[Tensor.module_load](https://pytorch.org/docs/2.1/generated/torch.Tensor.module_load.html)|Not Support|N/A|
@@ -345,11 +345,11 @@
 |[Tensor.mvlgamma](https://pytorch.org/docs/2.1/generated/torch.Tensor.mvlgamma.html)|Not Support|N/A|
 |[Tensor.mvlgamma_](https://pytorch.org/docs/2.1/generated/torch.Tensor.mvlgamma_.html)|Not Support|N/A|
 |[Tensor.nansum](https://pytorch.org/docs/2.1/generated/torch.Tensor.nansum.html)|Not Support|N/A|
-|[Tensor.narrow](https://pytorch.org/docs/2.1/generated/torch.Tensor.narrow.html)|Not Support|N/A|
+|[Tensor.narrow](https://pytorch.org/docs/2.1/generated/torch.Tensor.narrow.html)|Stable|N/A|
 |[Tensor.narrow_copy](https://pytorch.org/docs/2.1/generated/torch.Tensor.narrow_copy.html)|Not Support|N/A|
 |[Tensor.ndimension](https://pytorch.org/docs/2.1/generated/torch.Tensor.ndimension.html)|Not Support|N/A|
 |[Tensor.nan_to_num](https://pytorch.org/docs/2.1/generated/torch.Tensor.nan_to_num.html)|Not Support|N/A|
-|[Tensor.nan_to_num_](https://pytorch.org/docs/2.1/generated/torch.Tensor.nan_to_num_.html)|Not Support|N/A|
+|[Tensor.nan_to_num_](https://pytorch.org/docs/2.1/generated/torch.Tensor.nan_to_num_.html)|Beta|N/A|
 |[Tensor.ne](https://pytorch.org/docs/2.1/generated/torch.Tensor.ne.html)|Stable|支持数据类型:bf16、fp16、fp32、uint8、int8、int16、int32、int64、bool|
 |[Tensor.ne_](https://pytorch.org/docs/2.1/generated/torch.Tensor.ne_.html)|Not Support|N/A|
 |[Tensor.not_equal](https://pytorch.org/docs/2.1/generated/torch.Tensor.not_equal.html)|Not Support|N/A|
@@ -358,13 +358,13 @@
 |[Tensor.neg_](https://pytorch.org/docs/2.1/generated/torch.Tensor.neg_.html)|Not Support|N/A|
 |[Tensor.negative](https://pytorch.org/docs/2.1/generated/torch.Tensor.negative.html)|Not Support|N/A|
 |[Tensor.negative_](https://pytorch.org/docs/2.1/generated/torch.Tensor.negative_.html)|Not Support|N/A|
-|[Tensor.nelement](https://pytorch.org/docs/2.1/generated/torch.Tensor.nelement.html)|Not Support|N/A|
+|[Tensor.nelement](https://pytorch.org/docs/2.1/generated/torch.Tensor.nelement.html)|Stable|N/A|
 |[Tensor.nextafter](https://pytorch.org/docs/2.1/generated/torch.Tensor.nextafter.html)|Not Support|N/A|
 |[Tensor.nextafter_](https://pytorch.org/docs/2.1/generated/torch.Tensor.nextafter_.html)|Not Support|N/A|
 |[Tensor.nonzero](https://pytorch.org/docs/2.1/generated/torch.Tensor.nonzero.html)|Not Support|N/A|
 |[Tensor.norm](https://pytorch.org/docs/2.1/generated/torch.Tensor.norm.html)|Not Support|N/A|
 |[Tensor.normal_](https://pytorch.org/docs/2.1/generated/torch.Tensor.normal_.html)|Not Support|N/A|
-|[Tensor.numel](https://pytorch.org/docs/2.1/generated/torch.Tensor.numel.html)|Not Support|N/A|
+|[Tensor.numel](https://pytorch.org/docs/2.1/generated/torch.Tensor.numel.html)|Stable|N/A|
 |[Tensor.numpy](https://pytorch.org/docs/2.1/generated/torch.Tensor.numpy.html)|Not Support|N/A|
 |[Tensor.orgqr](https://pytorch.org/docs/2.1/generated/torch.Tensor.orgqr.html)|Not Support|N/A|
 |[Tensor.ormqr](https://pytorch.org/docs/2.1/generated/torch.Tensor.ormqr.html)|Not Support|N/A|
@@ -394,22 +394,22 @@
 |[Tensor.reciprocal](https://pytorch.org/docs/2.1/generated/torch.Tensor.reciprocal.html)|Not Support|N/A|
 |[Tensor.reciprocal_](https://pytorch.org/docs/2.1/generated/torch.Tensor.reciprocal_.html)|Not Support|N/A|
 |[Tensor.record_stream](https://pytorch.org/docs/2.1/generated/torch.Tensor.record_stream.html)|Not Support|N/A|
-|[Tensor.register_hook](https://pytorch.org/docs/2.1/generated/torch.Tensor.register_hook.html)|Not Support|N/A|
+|[Tensor.register_hook](https://pytorch.org/docs/2.1/generated/torch.Tensor.register_hook.html)|Beta|N/A|
 |[Tensor.register_post_accumulate_grad_hook](https://pytorch.org/docs/2.1/generated/torch.Tensor.register_post_accumulate_grad_hook.html)|Not Support|N/A|
 |[Tensor.remainder](https://pytorch.org/docs/2.1/generated/torch.Tensor.remainder.html)|Not Support|N/A|
 |[Tensor.remainder_](https://pytorch.org/docs/2.1/generated/torch.Tensor.remainder_.html)|Not Support|N/A|
 |[Tensor.renorm](https://pytorch.org/docs/2.1/generated/torch.Tensor.renorm.html)|Not Support|N/A|
 |[Tensor.renorm_](https://pytorch.org/docs/2.1/generated/torch.Tensor.renorm_.html)|Not Support|N/A|
-|[Tensor.repeat](https://pytorch.org/docs/2.1/generated/torch.Tensor.repeat.html)|Not Support|N/A|
+|[Tensor.repeat](https://pytorch.org/docs/2.1/generated/torch.Tensor.repeat.html)|Stable|N/A|
 |[Tensor.repeat_interleave](https://pytorch.org/docs/2.1/generated/torch.Tensor.repeat_interleave.html)|Not Support|N/A|
-|[Tensor.requires_grad](https://pytorch.org/docs/2.1/generated/torch.Tensor.requires_grad.html)|Not Support|N/A|
-|[Tensor.requires_grad_](https://pytorch.org/docs/2.1/generated/torch.Tensor.requires_grad_.html)|Not Support|N/A|
-|[Tensor.reshape](https://pytorch.org/docs/2.1/generated/torch.Tensor.reshape.html)|Not Support|N/A|
+|[Tensor.requires_grad](https://pytorch.org/docs/2.1/generated/torch.Tensor.requires_grad.html)|Beta|N/A|
+|[Tensor.requires_grad_](https://pytorch.org/docs/2.1/generated/torch.Tensor.requires_grad_.html)|Beta|N/A|
+|[Tensor.reshape](https://pytorch.org/docs/2.1/generated/torch.Tensor.reshape.html)|Stable|N/A|
 |[Tensor.reshape_as](https://pytorch.org/docs/2.1/generated/torch.Tensor.reshape_as.html)|Not Support|N/A|
-|[Tensor.resize_](https://pytorch.org/docs/2.1/generated/torch.Tensor.resize_.html)|Not Support|N/A|
+|[Tensor.resize_](https://pytorch.org/docs/2.1/generated/torch.Tensor.resize_.html)|Beta|N/A|
 |[Tensor.resize_as_](https://pytorch.org/docs/2.1/generated/torch.Tensor.resize_as_.html)|Not Support|N/A|
-|[Tensor.retain_grad](https://pytorch.org/docs/2.1/generated/torch.Tensor.retain_grad.html)|Not Support|N/A|
-|[Tensor.retains_grad](https://pytorch.org/docs/2.1/generated/torch.Tensor.retains_grad.html)|Not Support|N/A|
+|[Tensor.retain_grad](https://pytorch.org/docs/2.1/generated/torch.Tensor.retain_grad.html)|Beta|N/A|
+|[Tensor.retains_grad](https://pytorch.org/docs/2.1/generated/torch.Tensor.retains_grad.html)|Beta|N/A|
 |[Tensor.roll](https://pytorch.org/docs/2.1/generated/torch.Tensor.roll.html)|Not Support|N/A|
 |[Tensor.rot90](https://pytorch.org/docs/2.1/generated/torch.Tensor.rot90.html)|Not Support|N/A|
 |[Tensor.round](https://pytorch.org/docs/2.1/generated/torch.Tensor.round.html)|Not Support|N/A|
@@ -445,12 +445,12 @@
 |[Tensor.arcsinh](https://pytorch.org/docs/2.1/generated/torch.Tensor.arcsinh.html)|Not Support|N/A|
 |[Tensor.arcsinh_](https://pytorch.org/docs/2.1/generated/torch.Tensor.arcsinh_.html)|Not Support|N/A|
 |[Tensor.shape](https://pytorch.org/docs/2.1/generated/torch.Tensor.shape.html)|Not Support|N/A|
-|[Tensor.size](https://pytorch.org/docs/2.1/generated/torch.Tensor.size.html)|Not Support|N/A|
+|[Tensor.size](https://pytorch.org/docs/2.1/generated/torch.Tensor.size.html)|Stable|N/A| |[Tensor.slogdet](https://pytorch.org/docs/2.1/generated/torch.Tensor.slogdet.html)|Not Support|N/A| |[Tensor.slice_scatter](https://pytorch.org/docs/2.1/generated/torch.Tensor.slice_scatter.html)|Not Support|N/A| |[Tensor.softmax](https://pytorch.org/docs/2.1/generated/torch.Tensor.softmax.html)|Not Support|N/A| |[Tensor.sort](https://pytorch.org/docs/2.1/generated/torch.Tensor.sort.html)|Not Support|N/A| -|[Tensor.split](https://pytorch.org/docs/2.1/generated/torch.Tensor.split.html)|Not Support|N/A| +|[Tensor.split](https://pytorch.org/docs/2.1/generated/torch.Tensor.split.html)|Stable|N/A| |[Tensor.sparse_mask](https://pytorch.org/docs/2.1/generated/torch.Tensor.sparse_mask.html)|Not Support|N/A| |[Tensor.sparse_dim](https://pytorch.org/docs/2.1/generated/torch.Tensor.sparse_dim.html)|Not Support|N/A| |[Tensor.sqrt](https://pytorch.org/docs/2.1/generated/torch.Tensor.sqrt.html)|Stable|支持数据类型:bf16、fp16、fp32、fp64、uint8、int8、int16、int32、int64、bool| @@ -458,7 +458,7 @@ |[Tensor.square](https://pytorch.org/docs/2.1/generated/torch.Tensor.square.html)|Stable|支持数据类型:fp16、fp32、fp64、uint8、int8、int16、int32、int64| |[Tensor.square_](https://pytorch.org/docs/2.1/generated/torch.Tensor.square_.html)|Not Support|N/A| |[Tensor.squeeze](https://pytorch.org/docs/2.1/generated/torch.Tensor.squeeze.html)|Not Support|N/A| -|[Tensor.squeeze_](https://pytorch.org/docs/2.1/generated/torch.Tensor.squeeze_.html)|Not Support|N/A| +|[Tensor.squeeze_](https://pytorch.org/docs/2.1/generated/torch.Tensor.squeeze_.html)|Stable|N/A| |[Tensor.std](https://pytorch.org/docs/2.1/generated/torch.Tensor.std.html)|Not Support|N/A| |[Tensor.stft](https://pytorch.org/docs/2.1/generated/torch.Tensor.stft.html)|Not Support|N/A| |[Tensor.storage](https://pytorch.org/docs/2.1/generated/torch.Tensor.storage.html)|Not Support|N/A| @@ -478,7 +478,7 @@ 
|[Tensor.t](https://pytorch.org/docs/2.1/generated/torch.Tensor.t.html)|Not Support|N/A| |[Tensor.t_](https://pytorch.org/docs/2.1/generated/torch.Tensor.t_.html)|Not Support|N/A| |[Tensor.tensor_split](https://pytorch.org/docs/2.1/generated/torch.Tensor.tensor_split.html)|Not Support|N/A| -|[Tensor.tile](https://pytorch.org/docs/2.1/generated/torch.Tensor.tile.html)|Not Support|N/A| +|[Tensor.tile](https://pytorch.org/docs/2.1/generated/torch.Tensor.tile.html)|Stable|N/A| |[Tensor.to](https://pytorch.org/docs/2.1/generated/torch.Tensor.to.html)|Not Support|N/A| |[Tensor.to_mkldnn](https://pytorch.org/docs/2.1/generated/torch.Tensor.to_mkldnn.html)|Not Support|N/A| |[Tensor.take](https://pytorch.org/docs/2.1/generated/torch.Tensor.take.html)|Not Support|N/A| @@ -501,9 +501,9 @@ |[Tensor.to_sparse_bsc](https://pytorch.org/docs/2.1/generated/torch.Tensor.to_sparse_bsc.html)|Not Support|N/A| |[Tensor.trace](https://pytorch.org/docs/2.1/generated/torch.Tensor.trace.html)|Not Support|N/A| |[Tensor.transpose](https://pytorch.org/docs/2.1/generated/torch.Tensor.transpose.html)|Not Support|N/A| -|[Tensor.transpose_](https://pytorch.org/docs/2.1/generated/torch.Tensor.transpose_.html)|Not Support|N/A| +|[Tensor.transpose_](https://pytorch.org/docs/2.1/generated/torch.Tensor.transpose_.html)|Beta|N/A| |[Tensor.triangular_solve](https://pytorch.org/docs/2.1/generated/torch.Tensor.triangular_solve.html)|Not Support|N/A| -|[Tensor.tril](https://pytorch.org/docs/2.1/generated/torch.Tensor.tril.html)|Not Support|N/A| +|[Tensor.tril](https://pytorch.org/docs/2.1/generated/torch.Tensor.tril.html)|Demo|N/A| |[Tensor.tril_](https://pytorch.org/docs/2.1/generated/torch.Tensor.tril_.html)|Not Support|N/A| |[Tensor.triu](https://pytorch.org/docs/2.1/generated/torch.Tensor.triu.html)|Not Support|N/A| |[Tensor.triu_](https://pytorch.org/docs/2.1/generated/torch.Tensor.triu_.html)|Demo|N/A| @@ -524,8 +524,8 @@ 
|[Tensor.values](https://pytorch.org/docs/2.1/generated/torch.Tensor.values.html)|Not Support|N/A| |[Tensor.var](https://pytorch.org/docs/2.1/generated/torch.Tensor.var.html)|Not Support|N/A| |[Tensor.vdot](https://pytorch.org/docs/2.1/generated/torch.Tensor.vdot.html)|Not Support|N/A| -|[Tensor.view](https://pytorch.org/docs/2.1/generated/torch.Tensor.view.html)|Not Support|N/A| -|[Tensor.view_as](https://pytorch.org/docs/2.1/generated/torch.Tensor.view_as.html)|Not Support|N/A| +|[Tensor.view](https://pytorch.org/docs/2.1/generated/torch.Tensor.view.html)|Stable|N/A| +|[Tensor.view_as](https://pytorch.org/docs/2.1/generated/torch.Tensor.view_as.html)|Stable|N/A| |[Tensor.vsplit](https://pytorch.org/docs/2.1/generated/torch.Tensor.vsplit.html)|Not Support|N/A| |[Tensor.where](https://pytorch.org/docs/2.1/generated/torch.Tensor.where.html)|Not Support|N/A| |[Tensor.xlogy](https://pytorch.org/docs/2.1/generated/torch.Tensor.xlogy.html)|Not Support|N/A|
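For reference, the methods marked `Stable` in the rows above are intended to behave as drop-in replacements for their PyTorch counterparts. A minimal sketch, assuming MSAdapter is installed and exposed through the `torch` namespace (the calls below also run unchanged on plain PyTorch):

```python
# Exercises several Tensor methods marked Stable above:
# reshape, size, view, split, tile, and repeat.
import torch

x = torch.arange(24.0)

y = x.reshape(2, 3, 4)        # Tensor.reshape -> shape (2, 3, 4)
assert y.size() == (2, 3, 4)  # Tensor.size compares equal to a tuple

v = y.view(6, 4)              # Tensor.view: reinterpret as (6, 4) without copying
parts = v.split(2, dim=0)     # Tensor.split: three chunks of shape (2, 4)

t = x[:4].tile(2)             # Tensor.tile: (4,) -> (8,)
r = x[:4].repeat(2)           # Tensor.repeat: (4,) -> (8,)
```

Methods still listed as `Beta` or `Demo` (e.g. `Tensor.register_hook`, `Tensor.tril`) may work for common cases but should not be relied on without testing against the current MSAdapter release.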