# learnCSRNet

**Repository Path**: alfa__cv__ai/learnCSRNet

## Basic Information

- **Project Name**: learnCSRNet
- **Description**: CSRNet reproduction
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 2
- **Created**: 2021-03-29
- **Last Updated**: 2021-03-29

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

## Learning CSRNet

CSRNet counts the people in an image. The code here mainly follows the [original author's source code](https://github.com/leeyeehoo/CSRNet-pytorch), with some modifications to adapt it to the package versions installed on my machine. See `requirements.txt` in this repository for the environment (exported with `pip freeze` from my current environment, torch 1.5.0 with CUDA; for a different setup, comment out my torch-related and Flask-related packages and run `pip install -r requirements.txt` in a fresh virtual environment).

### step1 install

Environment as described in `requirements.txt` above: Python 3.8 / torch 1.5.0 with CUDA.

### step2 make_dataset

Adjust some `print` statements in the notebook to Python 3 syntax, then change the dataset path.

![image-20200715134100477](https://raw.githubusercontent.com/lonelyislandXD/picLib/master/img/20200715195604.png)

After that an error occurs at the position marked below; wrapping the expression in `list()` makes it run (in Python 3, `zip` returns an iterator, so it must be materialized before being passed to `np.array`).

![1553081148694](https://raw.githubusercontent.com/lonelyislandXD/picLib/master/img/20200715195605.png)

Generate the density maps:

![image-20200715134446590](https://raw.githubusercontent.com/lonelyislandXD/picLib/master/img/20200715195606.png)

View a generated density map:

![image-20200715134511486](https://raw.githubusercontent.com/lonelyislandXD/picLib/master/img/20200715195607.png)

Following the code above, I wrote a program that generates the density map for a single image (see test.py; it is roughly the same code, with the for loop over all images replaced by single-image density map generation), inspected some variables in debug mode, and then counted the people.

The value of `k`:

![image-20200715153310630](https://raw.githubusercontent.com/lonelyislandXD/picLib/master/img/20200715195608.png)

```python
gt = mat["image_info"][0,0][0,0][0]
```

![image-20200715153854993](https://raw.githubusercontent.com/lonelyislandXD/picLib/master/img/20200715195609.png)

```python
# place one unit of mass at each annotated head position inside the image
for i in range(0, len(gt)):
    if int(gt[i][1]) < img.shape[0] and int(gt[i][0]) < img.shape[1]:
        k[int(gt[i][1]), int(gt[i][0])] = 1
```

Inside `gaussian_filter_density`, the kernel width for each point is chosen adaptively from its three nearest neighbours:

```python
if gt_count > 1:
    sigma = (distances[i][1] + distances[i][2] + distances[i][3]) * 0.1
else:
    sigma = np.average(np.array(gt.shape)) / 2. / 2.  # case: 1 point
density += scipy.ndimage.filters.gaussian_filter(pt2d, sigma, mode='constant')
print('done.')
```

![image-20200715155857463](https://raw.githubusercontent.com/lonelyislandXD/picLib/master/img/20200715195617.png)

```python
k = density
with h5py.File(img_path.replace('.jpg', '.h5').replace('images', 'ground_truth'), 'w') as hf:
    hf['density'] = k
```

![image-20200715155947204](https://raw.githubusercontent.com/lonelyislandXD/picLib/master/img/20200715195618.png)

Plot the ground truth for the image:

```python
gt_file = h5py.File(img_paths[0].replace('.jpg', '.h5').replace('images', 'ground_truth'), 'r')
```

![image-20200715160408351](https://raw.githubusercontent.com/lonelyislandXD/picLib/master/img/20200715195619.png)

```python
groundtruth = np.asarray(gt_file['density'])
```

![image-20200715160458639](https://raw.githubusercontent.com/lonelyislandXD/picLib/master/img/20200715195620.png)

Plot it:

```python
plt.imshow(groundtruth, cmap=CM.jet)
```

![1553089178172](https://raw.githubusercontent.com/lonelyislandXD/picLib/master/img/20200715195621.png)

```python
num = np.sum(groundtruth)
print("head count: " + str(num))  # np.sum returns a float, so convert it before concatenating
```

![image-20200715160912980](https://raw.githubusercontent.com/lonelyislandXD/picLib/master/img/20200715195622.png)
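The single-image pipeline above can be sketched end to end in a few lines. This is a minimal, self-contained illustration, not the repository's test.py: it uses made-up head coordinates instead of the `.mat` annotations and a fixed `sigma` instead of the adaptive k-NN one, but the point-placement loop and `gaussian_filter(..., mode='constant')` call mirror the notebook code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_from_points(points, shape, sigma=15):
    """Build a density map: one unit of mass per annotated head,
    spread by a Gaussian kernel. A fixed sigma is used here for
    simplicity; the notebook chooses sigma per point from the
    three nearest neighbours."""
    k = np.zeros(shape, dtype=np.float32)
    for x, y in points:
        # keep only annotations that fall inside the image bounds
        if int(y) < shape[0] and int(x) < shape[1]:
            k[int(y), int(x)] = 1
    # mode='constant' matches the notebook's gaussian_filter call
    return gaussian_filter(k, sigma, mode='constant')

# three synthetic "head" annotations on a 200x200 image
pts = [(50, 60), (120, 80), (30, 150)]
density = density_from_points(pts, (200, 200))
print("count:", density.sum())  # close to 3: the Gaussian preserves mass
```

Summing the density map recovers the head count (up to a little mass lost at the image border), which is exactly how the count is read off from `groundtruth` above.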
With the same operations, ground-truth density maps can be generated for every image in the dataset: run make_dataset.ipynb (on GPU), or convert the notebook to a script with `jupyter nbconvert --to script make_dataset.ipynb` and then run `python make_dataset.py`.

### step3 Training

The steps below follow the [README of the original author's code](https://github.com/leeyeehoo/CSRNet-pytorch/blob/master/README.md), with some rewrites for Python 3.

**Note**: for Python 3, make the changes below; if your environment matches the author's, no changes are needed.

1. In `model.py`, change `xrange` in line 18 to `range`.
2. In `model.py`, change line 19 to: `list(self.frontend.state_dict().items())[i][1].data[:] = list(mod.state_dict().items())[i][1].data[:]`
3. In `image.py`, change line 40 to: `target = cv2.resize(target,(target.shape[1]//8,target.shape[0]//8),interpolation = cv2.INTER_CUBIC)*64`

- In `part_A_train.json`: change the path of the images.
- In `part_A_val.json`: change the path of the images.

Run:

```shell
python train.py part_A_train.json part_A_val.json 0 0
```

![image-20200715163157094](https://raw.githubusercontent.com/lonelyislandXD/picLib/master/img/20200715195623.png)

### step4 Testing

Test images, 182 in total:

![image-20200715163404149](https://raw.githubusercontent.com/lonelyislandXD/picLib/master/img/20200715195624.png)

Convert the notebook to a script:

```shell
jupyter nbconvert --to script val.ipynb
```

Finally, test the model's performance with val.py, after changing the paths to point to the pretrained weights and the test images:

```shell
python val.py
```

Running val.py prints the mean absolute error over all 182 test images:

![image-20200715194856093](https://raw.githubusercontent.com/lonelyislandXD/picLib/master/img/20200715195625.png)

This value is not quite right, mainly because of insufficient training: I only ran 6 epochs, whereas the author's setting is 400. The author's result in val.ipynb is 79. A full 400-epoch result will be added later.

A single image can then be tested:

```shell
python test_single-image.py
```

![](https://raw.githubusercontent.com/lonelyislandXD/picLib/master/img/20200715195626.png)![image-20200715195240284](https://raw.githubusercontent.com/lonelyislandXD/picLib/master/img/20200715195627.png)

![1553157772668](https://raw.githubusercontent.com/lonelyislandXD/picLib/master/img/20200715195319.png)

The result is not especially good: the author sets 400 epochs, but to cut training time I reduced it to 10, so the model is probably not yet well trained.
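For reference, the mean absolute error that val.py reports is just the mean of |predicted count − ground-truth count| over the test images. A minimal sketch of that computation, using made-up counts in place of real model outputs (in the actual val.py, the prediction is the sum of the model's output density map and the ground truth is the sum of the `.h5` density map):

```python
import numpy as np

def mean_absolute_error(pred_counts, gt_counts):
    """MAE over the test set: mean of |predicted - ground truth| per image."""
    pred = np.asarray(pred_counts, dtype=np.float64)
    gt = np.asarray(gt_counts, dtype=np.float64)
    return float(np.mean(np.abs(pred - gt)))

# synthetic per-image counts for illustration only
pred = [312.4, 98.7, 1501.2]
gt = [300.0, 105.0, 1450.0]
print(mean_absolute_error(pred, gt))  # roughly 23.3
```

With only a handful of epochs the predicted counts are far from the ground truth, which is why the MAE above is much larger than the author's 79.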