Multiple Object Extraction from Aerial Imagery with Convolutional Neural Networks

Related tags

Deep Learning, ssai-cnn
Overview

This is an implementation of Volodymyr Mnih's dissertation methods on his Massachusetts road & building dataset and my original methods that are published in this paper.

Requirements

  • Python 3.5 (anaconda with python 3.5.1 is recommended)
    • Chainer 1.5.0.2
    • Cython 0.23.4
    • NumPy 1.10.1
    • tqdm
  • OpenCV 3.0.0
  • lmdb 0.87
  • Boost 1.59.0
  • Boost.NumPy (26aaa5b)
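
A quick way to confirm the pinned Python-side versions after installation is a short check script. This is only an illustrative sketch, not part of the repository; it simply imports the packages listed above and prints their versions.

# check_versions.py -- illustrative sketch, not part of this repository.
import sys
import chainer
import Cython
import numpy
import lmdb
import cv2
import tqdm

# Expected versions per the Requirements list above.
print("Python :", sys.version.split()[0])   # 3.5.x
print("Chainer:", chainer.__version__)      # 1.5.0.2
print("Cython :", Cython.__version__)       # 0.23.4
print("NumPy  :", numpy.__version__)        # 1.10.1
print("lmdb   :", lmdb.__version__)         # 0.87
print("OpenCV :", cv2.__version__)          # 3.0.0
print("tqdm   :", tqdm.__version__)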

Build Libraries

OpenCV 3.0.0

$ wget https://github.com/Itseez/opencv/archive/3.0.0.zip
$ unzip 3.0.0.zip && rm -rf 3.0.0.zip
$ cd opencv-3.0.0 && mkdir build && cd build
$ bash $SSAI_HOME/shells/build_opencv.sh
$ make -j32 install

If some libraries are missing, install them as below before compiling OpenCV 3.0.0:

$ sudo apt-get install -y libopencv-dev libtbb-dev

Boost 1.59.0

$ wget http://downloads.sourceforge.net/project/boost/boost/1.59.0/boost_1_59_0.tar.bz2
$ tar xvf boost_1_59_0.tar.bz2 && rm -rf boost_1_59_0.tar.bz2
$ cd boost_1_59_0
$ ./bootstrap.sh
$ ./b2 -j32 install cxxflags="-I/home/ubuntu/anaconda3/include/python3.5m"

Boost.NumPy

$ git clone https://github.com/ndarray/Boost.NumPy.git
$ cd Boost.NumPy && mkdir build && cd build
$ cmake -DPYTHON_LIBRARY=$HOME/anaconda3/lib/libpython3.5m.so ../
$ make install

Build utils

$ cd $SSAI_HOME/scripts/utils
$ bash build.sh

Create Dataset

$ bash shells/download.sh
$ bash shells/create_dataset.sh

| Dataset         | Training | Validation | Test   |
|:---------------:|:--------:|:----------:|:------:|
| mass_roads      | 8580352  | 108416     | 379456 |
| mass_roads_mini | 1060928  | 30976      | 77440  |
| mass_buildings  | 1060928  | 30976      | 77440  |
| mass_merged     | 1060928  | 30976      | 77440  |
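
The numbers above are patch counts per split. To sanity-check the generated LMDBs, a minimal sketch like the one below can read back the first ortho/label patch pair. It assumes, as the repository's test output suggests, that ortho (sat) patches are stored as raw 92x92x3 uint8 buffers; the label (map) patch shape used here (24x24) is an assumption, so adjust it to your patch settings.

# sanity_check_lmdb.py -- illustrative sketch, not part of this repository.
import lmdb
import numpy as np

def first_patch(db_path, shape):
    # Open the LMDB read-only and fetch the first (key, value) pair.
    env = lmdb.open(db_path, readonly=True, lock=False)
    with env.begin() as txn:
        key, value = next(iter(txn.cursor()))
    # Assumption: values are raw uint8 buffers of the given shape.
    return key, np.frombuffer(value, dtype=np.uint8).reshape(shape)

key, sat = first_patch('data/mass_merged/lmdb/train_sat', (92, 92, 3))  # ortho patch
print(key, sat.shape, sat.dtype)
key, lab = first_patch('data/mass_merged/lmdb/train_map', (24, 24))     # assumed label shape
print(key, lab.shape, lab.dtype)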

Start Training

$ CHAINER_TYPE_CHECK=0 CHAINER_SEED=$1 \
nohup python scripts/train.py \
--seed 0 \
--gpu 0 \
--model models/MnihCNN_multi.py \
--train_ortho_db data/mass_merged/lmdb/train_sat \
--train_label_db data/mass_merged/lmdb/train_map \
--valid_ortho_db data/mass_merged/lmdb/valid_sat \
--valid_label_db data/mass_merged/lmdb/valid_map \
--dataset_size 1.0 \
> mnih_multi.log 2>&1 < /dev/null &

Prediction

python scripts/predict.py \
--model results/MnihCNN_multi_2016-02-03_03-34-58/MnihCNN_multi.py \
--param results/MnihCNN_multi_2016-02-03_03-34-58/epoch-400.model \
--test_sat_dir data/mass_merged/test/sat \
--channels 3 \
--offset 8 \
--gpu 0 &

Evaluation

$ PYTHONPATH=".":$PYTHONPATH python scripts/evaluate.py \
--map_dir data/mass_merged/test/map \
--result_dir results/MnihCNN_multi_2016-02-03_03-34-58/ma_prediction_400 \
--channel 3 \
--offset 8 \
--relax 3 \
--steps 1024
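
For reference, the --relax 3 option corresponds to the relaxed precision/recall commonly used for the Massachusetts benchmarks (following Mnih's dissertation): a predicted positive pixel counts as correct if a ground-truth positive lies within rho pixels of it, and symmetrically for recall. The sketch below only illustrates that definition under stated assumptions (binary maps, a square rho-neighbourhood, SciPy for the dilation); the repository's evaluate.py additionally sweeps --steps thresholds to report breakeven values, and its details may differ.

# relaxed_pr.py -- illustrative sketch of relaxed precision/recall, not the repository's evaluate.py.
import numpy as np
from scipy.ndimage import binary_dilation

def relaxed_precision_recall(pred, label, rho=3):
    """pred, label: binary 2-D arrays of the same shape; rho: relaxation radius in pixels."""
    pred = pred.astype(bool)
    label = label.astype(bool)
    win = np.ones((2 * rho + 1, 2 * rho + 1), dtype=bool)  # assumed square neighbourhood
    # Precision: predicted positives that have some true positive within rho pixels.
    precision = (pred & binary_dilation(label, structure=win)).sum() / max(pred.sum(), 1)
    # Recall: true positives that have some predicted positive within rho pixels.
    recall = (label & binary_dilation(pred, structure=win)).sum() / max(label.sum(), 1)
    return precision, recall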

Results

Conventional methods

| Model                         | Mass. Buildings | Mass. Roads            | Mass. Roads-Mini |
|:------------------------------|:----------------|:-----------------------|:-----------------|
| MnihCNN                       | 0.9150          | 0.8873                 | N/A              |
| MnihCNN + CRF                 | 0.9211          | 0.8904                 | N/A              |
| MnihCNN + Post-processing net | 0.9203          | 0.9006                 | N/A              |
| Single-channel                | 0.9503062       | 0.91730195 (epoch 120) | 0.89989258       |
| Single-channel with MA        | 0.953766        | 0.91903522 (epoch 120) | 0.902895         |

Multi-channel models (epoch = 400, step = 1024)

| Model                       | Building-channel | Road-channel | Road-channel (fixed) |
|:----------------------------|:-----------------|:-------------|:---------------------|
| Multi-channel               | 0.94346856       | 0.89379946   | 0.9033020025         |
| Multi-channel with MA       | 0.95231262       | 0.89971473   | 0.90982972           |
| Multi-channel with CIS      | 0.94417078       | 0.89415726   | 0.9039476538         |
| Multi-channel with CIS + MA | 0.95280431       | 0.90071099   | 0.91108087           |

Test on urban areas (epoch = 400, step = 1024)

| Model                       | Building-channel | Road-channel |
|:----------------------------|:-----------------|:-------------|
| Single-channel with MA      | 0.962133         | 0.944748     |
| Multi-channel with MA       | 0.962797         | 0.947224     |
| Multi-channel with CIS + MA | 0.964499         | 0.950465     |

x0_sigma for inverting feature maps

159.348674296

After prediction for single MA

$ bash shells/predict.sh
$ python scripts/integrate.py --result_dir results --epoch 200 --size 7,60
$ PYTHONPATH=".":$PYTHONPATH python scripts/evaluate.py --map_dir data/mass_merged/test/map --result_dir results/integrated_200 --channel 3 --offset 8 --relax 3 --steps 256
$ PYTHONPATH="." python scripts/eval_urban.py --result_dir results/integrated_200 --test_map_dir data/mass_merged/test/map --steps 256

Pre-trained models and Predicted results

Reference

If you use this code for your project, please cite this journal paper:

Shunta Saito, Takayoshi Yamashita, Yoshimitsu Aoki, "Multiple Object Extraction from Aerial Imagery with Convolutional Neural Networks", Journal of Imaging Science and Technology, Vol. 60, No. 1, pp. 10402-1-10402-9, 2015

Comments
  • cannot reshape array of size 3 into shape (92,92,3)


    Hi there! I'm having trouble running bash shells/create_dataset.sh

    The error is this (the same output is printed twelve times, once per dataset, and each of the two tracebacks below then appears four times):

    patch size: 92 24 16
    n_all_files: 0
    patches: 0

    Traceback (most recent call last):
      File "tests/test_dataset.py", line 36, in
        o_val, dtype=np.uint8).reshape((92, 92, 3))
    ValueError: cannot reshape array of size 0 into shape (92,92,3)

    Traceback (most recent call last):
      File "tests/test_transform.py", line 55, in
        (args.ortho_original_side, args.ortho_original_side, 3))
    ValueError: cannot reshape array of size 0 into shape (92,92,3)

    Any ideas? Thank you very much for your time!

    opened by flor88 8
  • Adam optimisation error


    Hi,

    I am trying to set the Adam optimization algorithm but am receiving the following error. Can you please help me with how to set the parameters?

    python3.5/site-packages/chainer-1.22.0-py3.5-linux-x86_64.egg/chainer/optimizers/adam.py", line 51, in lr
        return self.alpha * math.sqrt(fix2) / fix1
    ZeroDivisionError: float division by zero

    Please see below the parameters that I have set.

    parser.add_argument('--opt', type=str, default='Adam', choices=['MomentumSGD', 'Adam', 'AdaGrad'])
    parser.add_argument('--weight_decay', type=float, default=0.0001)
    parser.add_argument('--alpha', type=float, default=0.001)
    parser.add_argument('--lr', type=float, default=0.0001)
    parser.add_argument('--lr_decay_freq', type=int, default=100)
    parser.add_argument('--lr_decay_ratio', type=float, default=0.1)
    parser.add_argument('--seed', type=int, default=1701)
    args = parser.parse_args()
    

    Thank you

    opened by Tejuwi 3
  • Warning: don't setup your environment using anaconda.


    This project uses Chainer as its deep learning framework. However, Chainer relies on a CUDA array library named CuPy, which is (seemingly) not provided by any Anaconda package. Therefore, setting up the environment with Anaconda will always result in a failure to find the cupy module.

    I tried using the system Python cupy library together with the conda chainer library, with no luck. If anyone has a solution, please follow up on this issue.

    opened by dragon9001 3
  • import Error


    I am using Python 3.6 on Windows 10. When I run the create_dataset script I get this error: ModuleNotFoundError: No module named 'utils.patches'. The error relates to this line: from utils.patches import divide_to_patches

    Kindly help..

    opened by mshakaib 2
  • Tesla P100 is slower


    Hi,

    I have tested the algorithm on a Tesla P100 (Ubuntu Server 16.04 LTS x86_64) and one epoch takes 2 hrs. The same algorithm on a Quadro M4000 (Ubuntu Desktop 16.04 LTS x86_64) takes 2 hrs 40 min.

    We expected the training time to come down to half that of the Quadro M4000, but there is not much difference between the Tesla P100 and the Quadro M4000.

    Please give me guidance on how to reduce the training time. I appreciate your kind help.

    opened by Tejuwi 1
  • cupy.cuda.curand.CURANDError.__init__ (cupy/cuda/curand.cpp:1108)


    I just entered:

    CHAINER_TYPE_CHECK=0 CHAINER_SEED=$1 nohup python scripts/train.py --seed 0 --gpu 0 --model models/MnihCNN_multi.py --train_ortho_db data/mass_merged/lmdb/train_sat --train_label_db data/mass_merged/lmdb/train_map --valid_ortho_db data/mass_merged/lmdb/valid_sat --valid_label_db data/mass_merged/lmdb/valid_map --dataset_size 1.0 > mnih_multi.log 2>&1 < /dev/null &

    and this is the log:

    You are running using the stub version of curand.
    Traceback (most recent call last):
      File "scripts/train.py", line 300, in
        xp.random.seed(args.seed)
      File "/home/qs/anaconda3/lib/python3.5/site-packages/cupy/random/generator.py", line 318, in seed
        get_random_state().seed(seed)
      File "/home/qs/anaconda3/lib/python3.5/site-packages/cupy/random/generator.py", line 350, in get_random_state
        rs = RandomState(seed)
      File "/home/qs/anaconda3/lib/python3.5/site-packages/cupy/random/generator.py", line 45, in __init__
        self._generator = curand.createGenerator(method)
      File "cupy/cuda/curand.pyx", line 92, in cupy.cuda.curand.createGenerator (cupy/cuda/curand.cpp:1443)
      File "cupy/cuda/curand.pyx", line 96, in cupy.cuda.curand.createGenerator (cupy/cuda/curand.cpp:1381)
      File "cupy/cuda/curand.pyx", line 85, in cupy.cuda.curand.check_status (cupy/cuda/curand.cpp:1216)
      File "cupy/cuda/curand.pyx", line 79, in cupy.cuda.curand.CURANDError.__init__ (cupy/cuda/curand.cpp:1108)
    KeyError: 51

    I have no idea,help

    opened by FogXcG 0
  • Updated environment versions and instructions


    Hi, I'm trying to train on my own dataset but have failed to get the environment working after trying many different CUDA, cuDNN, CuPy and Python versions on Ubuntu 16.04 with an NVIDIA GTX-950M.

    Are there any updated environment versions for CUDA, cuDNN, CuPy, Python, Chainer and NumPy?

    Cheers,

    opened by Vol-i 3
  • IndexError: index 76 is out of bounds for axis 1 with size 3


    Hello,

    I am currently trying to automate parts of this project and I am running into difficulties during the training phase using CPU mode, which throws an IndexError and appears to hang the entire training. I am using a very small dataset from the mass_buildings set, i.e. I am using 8 training images and 2 validation images. The purpose is only to test and not to have accurate results at the moment. Below is the state of the installation and steps I am using:

    System:

    uname -a
    Linux user-VirtualBox 4.10.0-28-generic #32~16.04.2-Ubuntu SMP Thu Jul 20 10:19:48 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
    

    Python (w/o Anaconda):

    $ python -V
    Python 3.5.2
    

    Python modules:

    [email protected]:~/Source/ssai-cnn$ pip3 freeze
    ...
    chainer==1.5.0.2
    ...
    Cython==0.23.4
    ...
    h5py==2.7.1
    ...
    lmdb==0.87
    ...
    matplotlib==2.1.1
    ...
    numpy==1.10.1
    onboard==1.2.0
    opencv-python==3.1.0.3
    ...
    six==1.10.0
    tqdm==4.19.5
    ...
    

    Additionally, Boost 1.59.0 and OpenCV 3.0.0 have been built and installed from source, and both installs appear successful. The utils are also built successfully.

    I have downloaded only a small subset of the mass_buildings dataset:

    # ls -R ./data/mass_buildings/train/
    ./data/mass_buildings/train/:
    map  sat
    
    ./data/mass_buildings/train/map:
    22678915_15.tif  22678930_15.tif  22678945_15.tif  22678960_15.tif
    
    ./data/mass_buildings/train/sat:
    22678915_15.tiff  22678930_15.tiff  22678945_15.tiff  22678960_15.tiff
    

    Below is the output obtained by running the shells/create_datasets.sh script, modified only to build the mass_buildings data:

    patch size: 92 24 16
    n_all_files: 1
    divide:0.6727173328399658
    0 / 1 n_patches: 7744
    patches:	 7744
    patch size: 92 24 16
    n_all_files: 1
    divide:0.6314394474029541
    0 / 1 n_patches: 7744
    patches:	 7744
    patch size: 92 24 16
    n_all_files: 4
    divide:0.6260504722595215
    0 / 4 n_patches: 7744
    divide:0.667414665222168
    1 / 4 n_patches: 15488
    divide:0.628319263458252
    2 / 4 n_patches: 23232
    divide:0.6634025573730469
    3 / 4 n_patches: 30976
    patches:	 30976
    0.03437542915344238 sec (128, 3, 64, 64) (128, 16, 16)
    

    Then the training script is initiated using the following command:

    [email protected]:~/Source/ssai-cnn$ CHAINER_TYPE_CHECK=0 CHAINER_SEED=$1 \
    > nohup python ./scripts/train.py \
    > --seed 0 \
    > --gpu -1 \
    > --model ./models/MnihCNN_multi.py \
    > --train_ortho_db data/mass_buildings/lmdb/train_sat \
    > --train_label_db data/mass_buildings/lmdb/train_map \
    > --valid_ortho_db data/mass_buildings/lmdb/valid_sat \
    > --valid_label_db data/mass_buildings/lmdb/valid_map \
    > --dataset_size 1.0 \
    > --epoch 1
    

    As you can see above, I've been using only 8 images and a single epoch. I left the entire process running for an entire night and it never completed, hence why I believe the process simply hung. Using nohup also does not complete. When forcefully stopped using Ctrl-C, I obtain the following message:

    # cat nohup.out 
    Traceback (most recent call last):
      File "./scripts/train.py", line 313, in <module>
        model, optimizer = one_epoch(args, model, optimizer, epoch, True)
      File "./scripts/train.py", line 265, in one_epoch
        optimizer.update(model, x, t)
      File "/usr/local/lib/python3.5/dist-packages/chainer/optimizer.py", line 377, in update
        loss = lossfun(*args, **kwds)
      File "./models/MnihCNN_multi.py", line 31, in __call__
        self.loss = F.softmax_cross_entropy(h, t, normalize=False)
      File "/usr/local/lib/python3.5/dist-packages/chainer/functions/loss/softmax_cross_entropy.py", line 152, in softmax_cross_entropy
        return SoftmaxCrossEntropy(use_cudnn, normalize)(x, t)
      File "/usr/local/lib/python3.5/dist-packages/chainer/function.py", line 105, in __call__
        outputs = self.forward(in_data)
      File "/usr/local/lib/python3.5/dist-packages/chainer/function.py", line 183, in forward
        return self.forward_cpu(inputs)
      File "/usr/local/lib/python3.5/dist-packages/chainer/functions/loss/softmax_cross_entropy.py", line 39, in forward_cpu
        p = yd[six.moves.range(t.size), numpy.maximum(t.flat, 0)]
    IndexError: index 76 is out of bounds for axis 1 with size 3
    

    This is the only components that fails at this moment. I've tested the prediction and evaluation phases using the pre-trained data and both seems to complete successfully. Any assistance on how I could use the training script using custom datasets would be appreciated.

    Thank you

    opened by InfectedPacket 1
  • Unable to use cudnn (GPU)


    I am using the following code for training: https://github.com/mitmul/ssai-cnn. My environment specifications are given below: Xeon CPU, 128 GB RAM, NVIDIA TITAN Xp graphics card, driver version 384.90, CUDA 9.0, cuDNN 7.0.

    Ubuntu 16.04, Anaconda3 environment on Python 3.5.1
    chainer 1.5.0.2
    cupy 2.0.0
    curl 7.55.1
    boost 1.65.1
    Cython 0.27.3
    h5py 2.7.1
    hdf5 1.10.1
    lmdb 0.87
    matplotlib 2.1.0
    numpy 1.13.3
    opencv 3.3.0
    pillow 4.3.0
    pycuda 2017.1.1
    python 3.5.1
    six 1.11.0
    tqdm 4.19.4

    I am running the training process on a dataset of 430 images (resolution 1500x1500 pixels). cuDNN is configured and cuda.cudnn_enabled gives True. Training does not run fine in the CPU environment. While using the GPU environment with the flag CHAINER_CUDNN=0 (i.e. cuDNN disabled), it gives a warning message about cuDNN and GPU usage shows only max. 25%. I feel it is not utilizing the GPU fully during training, as 1 epoch takes 2-3 hrs on average; assuming this for 400 epochs, it would take 800-1200 hrs (33-50 days) to complete the training.

    Warning message showing cuDNN not enabled for training:

    /home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/cuda.py:85: UserWarning: cuDNN is not enabled. Please reinstall chainer after you install cudnn (see https://github.com/pfnet/chainer#installation). 'cuDNN is not enabled.\n'

    When I tried to run the training process using cuDNN (i.e. CHAINER_CUDNN=1), it gives the following error:

    2017-12-08 11:36:34 INFO start training...
    2017-12-08 11:36:34 INFO learning rate:0.0005
    2017-12-08 11:36:34 INFO random skip:2
    Traceback (most recent call last):
      File "/common/workspace/ssai-cnn/src/scripts/train.py", line 373, in
        model, optimizer = one_epoch(args, model, optimizer, epoch, True)
      File "/common/workspace/ssai-cnn/src/scripts/train.py", line 285, in one_epoch
        optimizer.update(model, x, t)
      File "/home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/optimizer.py", line 377, in update
        loss = lossfun(*args, **kwds)
      File "/common/workspace/ssai-cnn/src/models/MnihCNN_multi.py", line 22, in __call__
        h = F.relu(self.conv1(x))
      File "/home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/links/connection/convolution_2d.py", line 74, in __call__
        return convolution_2d.convolution_2d(x, self.W, self.b, self.stride, self.pad, self.use_cudnn)
      File "/home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/functions/connection/convolution_2d.py", line 267, in convolution_2d
        return func(x, W, b)
      File "/home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/function.py", line 105, in __call__
        outputs = self.forward(in_data)
      File "/home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/function.py", line 181, in forward
        return self.forward_gpu(inputs)
      File "/home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/functions/connection/convolution_2d.py", line 80, in forward_gpu
        (self.ph, self.pw), (self.sy, self.sx))
    TypeError: create_convolution_descriptor() missing 1 required positional argument: 'dtype'

    I just want to confirm: did your process run with cuDNN enabled? If yes, what Chainer, CuPy, and other supporting package versions were used? What will be the challenges for the existing code if I have to upgrade to Chainer 3.0.0 or 3.1.0, CuPy 2.1.0, and other packages? Which Chainer-CuPy version combination would be preferable?

    Thanks

    opened by sundeeptewari 4
  • Is it possible to build this for windows?


    Hi, I use Windows 10, Anaconda3 and Python 3.6. I was wondering if I would be able to build the utils if I change the paths in the file build.sh, e.g. $PYTHON_DIR/lib/libpython3.5m.so to the Windows Anaconda equivalent (if there is one), and then run bash using Cygwin.

    Or should I give up and use a Linux machine? Thanks, Véro

    opened by VeroL 2
  • fatal error: pyconfig.h: No such file or directory  # include <pyconfig.h>


    -- Found PythonLibs: /home/yanghuan/.pyenv/versions/anaconda3-2.4.0/lib/libpython3.5m.so
    -- Configuring done
    -- Generating done
    CMake Warning:
      Manually-specified variables were not used by the project:

        PYTHON_INCLUDE_DIR2

    -- Build files have been written to: /home/yanghuan/下载/ssai-cnn-master/utils
    Scanning dependencies of target patches
    [ 16%] Building CXX object CMakeFiles/patches.dir/src/devide_to_patches.cpp.o
    In file included from /usr/local/include/boost/python/detail/prefix.hpp:13:0,
                     from /usr/local/include/boost/python/args.hpp:8,
                     from /usr/local/include/boost/python.hpp:11,
                     from /home/yanghuan/下载/ssai-cnn-master/utils/src/devide_to_patches.cpp:4:
    /usr/local/include/boost/python/detail/wrap_python.hpp:50:23: fatal error: pyconfig.h: No such file or directory
     # include <pyconfig.h>
                           ^
    compilation terminated.
    CMakeFiles/patches.dir/build.make:62: recipe for target 'CMakeFiles/patches.dir/src/devide_to_patches.cpp.o' failed
    make[2]: *** [CMakeFiles/patches.dir/src/devide_to_patches.cpp.o] Error 1
    CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/patches.dir/all' failed
    make[1]: *** [CMakeFiles/patches.dir/all] Error 2
    Makefile:83: recipe for target 'all' failed
    make: *** [all] Error 2

    opened by yizuifangxiuyh 3
Releases(v1.0.0)
  • v1.0.0(Oct 22, 2018)

Owner
Shunta Saito
Ph.D in Engineering, Researcher at Preferred Networks, Inc.