Differentiable Neural Computers, Sparse Access Memory and Sparse Differentiable Neural Computers, for PyTorch

Overview

Differentiable Neural Computers and family, for PyTorch

Includes:

  1. Differentiable Neural Computers (DNC)
  2. Sparse Access Memory (SAM)
  3. Sparse Differentiable Neural Computers (SDNC)


This is an implementation of Differentiable Neural Computers (DNCs), described in the paper Hybrid computing using a neural network with dynamic external memory (Graves et al.), as well as Sparse DNCs (SDNCs) and Sparse Access Memory (SAM), described in Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes.

Install

pip install dnc

From source

git clone https://github.com/ixaxaar/pytorch-dnc
cd pytorch-dnc
pip install -r ./requirements.txt
pip install -e .

To use fully GPU-based SDNCs or SAMs, install FAISS:

conda install faiss-gpu -c pytorch

pytest is required to run the tests.

Architecture

Usage

DNC

Constructor Parameters:

Following are the constructor parameters:

Argument Default Description
input_size None Size of the input vectors
hidden_size None Size of hidden units
rnn_type 'lstm' Type of recurrent cells used in the controller
num_layers 1 Number of layers of recurrent units in the controller
num_hidden_layers 2 Number of hidden layers per layer of the controller
bias True Bias
batch_first True Whether data is fed batch first
dropout 0 Dropout between layers in the controller
bidirectional False If the controller is bidirectional (not yet implemented)
nr_cells 5 Number of memory cells
read_heads 2 Number of read heads
cell_size 10 Size of each memory cell
nonlinearity 'tanh' If using 'rnn' as rnn_type, non-linearity of the RNNs
gpu_id -1 ID of the GPU, -1 for CPU
independent_linears False Whether to use independent linear units to derive interface vector
share_memory True Whether to share memory between controller layers

Following are the forward pass parameters:

Argument Default Description
input - The input vector (B*T*X) or (T*B*X)
hidden (None,None,None) Hidden states (controller hidden, memory hidden, read vectors)
reset_experience False Whether to reset memory
pass_through_memory True Whether to pass through memory

Example usage

import torch
from dnc import DNC

rnn = DNC(
  input_size=64,
  hidden_size=128,
  rnn_type='lstm',
  num_layers=4,
  nr_cells=100,
  cell_size=32,
  read_heads=4,
  batch_first=True,
  gpu_id=0
)

(controller_hidden, memory, read_vectors) = (None, None, None)

output, (controller_hidden, memory, read_vectors) = \
  rnn(torch.randn(10, 4, 64), (controller_hidden, memory, read_vectors), reset_experience=True)
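
The hidden tuple returned by each call can be fed back into the next call, so the memory persists across consecutive segments of a long sequence. A minimal sketch of this (reusing the rnn instance constructed above; the segment tensors are hypothetical placeholders):

import torch

(controller_hidden, memory, read_vectors) = (None, None, None)

for step in range(5):
  # (batch, time, input_size) since batch_first=True; move to the GPU if gpu_id >= 0
  segment = torch.randn(10, 4, 64)
  # reset_experience=True only on the first segment wipes the memory once;
  # later segments continue from the previous memory contents
  output, (controller_hidden, memory, read_vectors) = \
    rnn(segment, (controller_hidden, memory, read_vectors), reset_experience=(step == 0))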

Debugging

The debug option causes the network to also return its memory hidden vectors (as numpy ndarrays) for the first batch of each forward step. These vectors can then be analyzed or visualized, for example with visdom.

import torch
from dnc import DNC

rnn = DNC(
  input_size=64,
  hidden_size=128,
  rnn_type='lstm',
  num_layers=4,
  nr_cells=100,
  cell_size=32,
  read_heads=4,
  batch_first=True,
  gpu_id=0,
  debug=True
)

(controller_hidden, memory, read_vectors) = (None, None, None)

output, (controller_hidden, memory, read_vectors), debug_memory = \
  rnn(torch.randn(10, 4, 64), (controller_hidden, memory, read_vectors), reset_experience=True)

Memory vectors returned by forward pass (np.ndarray):

Key Y axis (dimensions) X axis (dimensions)
debug_memory['memory'] layer * time nr_cells * cell_size
debug_memory['link_matrix'] layer * time nr_cells * nr_cells
debug_memory['precedence'] layer * time nr_cells
debug_memory['read_weights'] layer * time read_heads * nr_cells
debug_memory['write_weights'] layer * time nr_cells
debug_memory['usage_vector'] layer * time nr_cells
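
These arrays can be rendered directly as heatmaps; a rough sketch (assuming a visdom server is already running locally and debug_memory comes from the forward call above):

import visdom

viz = visdom.Visdom()  # assumes `python -m visdom.server` is running

# rows are layer * time, columns are as listed in the table above
viz.heatmap(debug_memory['memory'], opts={'title': 'memory (nr_cells * cell_size)'})
viz.heatmap(debug_memory['read_weights'], opts={'title': 'read weights (read_heads * nr_cells)'})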

SDNC

Constructor Parameters:

Following are the constructor parameters:

Argument Default Description
input_size None Size of the input vectors
hidden_size None Size of hidden units
rnn_type 'lstm' Type of recurrent cells used in the controller
num_layers 1 Number of layers of recurrent units in the controller
num_hidden_layers 2 Number of hidden layers per layer of the controller
bias True Bias
batch_first True Whether data is fed batch first
dropout 0 Dropout between layers in the controller
bidirectional False If the controller is bidirectional (not yet implemented)
nr_cells 5000 Number of memory cells
read_heads 4 Number of read heads
sparse_reads 4 Number of sparse memory reads per read head
temporal_reads 4 Number of temporal reads
cell_size 10 Size of each memory cell
nonlinearity 'tanh' If using 'rnn' as rnn_type, non-linearity of the RNNs
gpu_id -1 ID of the GPU, -1 for CPU
independent_linears False Whether to use independent linear units to derive interface vector
share_memory True Whether to share memory between controller layers

Following are the forward pass parameters:

Argument Default Description
input - The input vector (B*T*X) or (T*B*X)
hidden (None,None,None) Hidden states (controller hidden, memory hidden, read vectors)
reset_experience False Whether to reset memory
pass_through_memory True Whether to pass through memory

Example usage

import torch
from dnc import SDNC

rnn = SDNC(
  input_size=64,
  hidden_size=128,
  rnn_type='lstm',
  num_layers=4,
  nr_cells=100,
  cell_size=32,
  read_heads=4,
  sparse_reads=4,
  batch_first=True,
  gpu_id=0
)

(controller_hidden, memory, read_vectors) = (None, None, None)

output, (controller_hidden, memory, read_vectors) = \
  rnn(torch.randn(10, 4, 64), (controller_hidden, memory, read_vectors), reset_experience=True)

Debugging

The debug option causes the network to also return its memory hidden vectors (as numpy ndarrays) for the first batch of each forward step. These vectors can then be analyzed or visualized, for example with visdom.

import torch
from dnc import SDNC

rnn = SDNC(
  input_size=64,
  hidden_size=128,
  rnn_type='lstm',
  num_layers=4,
  nr_cells=100,
  cell_size=32,
  read_heads=4,
  batch_first=True,
  sparse_reads=4,
  temporal_reads=4,
  gpu_id=0,
  debug=True
)

(controller_hidden, memory, read_vectors) = (None, None, None)

output, (controller_hidden, memory, read_vectors), debug_memory = \
  rnn(torch.randn(10, 4, 64), (controller_hidden, memory, read_vectors), reset_experience=True)

Memory vectors returned by forward pass (np.ndarray):

Key Y axis (dimensions) X axis (dimensions)
debug_memory['memory'] layer * time nr_cells * cell_size
debug_memory['visible_memory'] layer * time sparse_reads+2*temporal_reads+1 * nr_cells
debug_memory['read_positions'] layer * time sparse_reads+2*temporal_reads+1
debug_memory['link_matrix'] layer * time sparse_reads+2*temporal_reads+1 * sparse_reads+2*temporal_reads+1
debug_memory['rev_link_matrix'] layer * time sparse_reads+2*temporal_reads+1 * sparse_reads+2*temporal_reads+1
debug_memory['precedence'] layer * time nr_cells
debug_memory['read_weights'] layer * time read_heads * nr_cells
debug_memory['write_weights'] layer * time nr_cells
debug_memory['usage'] layer * time nr_cells
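
Since several of these widths depend on the sparse and temporal read settings, a quick sanity check of the reported shapes (hypothetical, based on the table above and the hyperparameters of the debug example) could be:

# sparse_reads=4, temporal_reads=4, nr_cells=100, cell_size=32, as constructed above
visible_cells = 4 + 2 * 4 + 1  # sparse_reads + 2*temporal_reads + 1

assert debug_memory['memory'].shape[1] == 100 * 32
assert debug_memory['read_positions'].shape[1] == visible_cells
assert debug_memory['link_matrix'].shape[1] == visible_cells * visible_cells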

SAM

Constructor Parameters:

Following are the constructor parameters:

Argument Default Description
input_size None Size of the input vectors
hidden_size None Size of hidden units
rnn_type 'lstm' Type of recurrent cells used in the controller
num_layers 1 Number of layers of recurrent units in the controller
num_hidden_layers 2 Number of hidden layers per layer of the controller
bias True Bias
batch_first True Whether data is fed batch first
dropout 0 Dropout between layers in the controller
bidirectional False If the controller is bidirectional (not yet implemented)
nr_cells 5000 Number of memory cells
read_heads 4 Number of read heads
sparse_reads 4 Number of sparse memory reads per read head
cell_size 10 Size of each memory cell
nonlinearity 'tanh' If using 'rnn' as rnn_type, non-linearity of the RNNs
gpu_id -1 ID of the GPU, -1 for CPU
independent_linears False Whether to use independent linear units to derive interface vector
share_memory True Whether to share memory between controller layers

Following are the forward pass parameters:

Argument Default Description
input - The input vector (B*T*X) or (T*B*X)
hidden (None,None,None) Hidden states (controller hidden, memory hidden, read vectors)
reset_experience False Whether to reset memory
pass_through_memory True Whether to pass through memory

Example usage

import torch
from dnc import SAM

rnn = SAM(
  input_size=64,
  hidden_size=128,
  rnn_type='lstm',
  num_layers=4,
  nr_cells=100,
  cell_size=32,
  read_heads=4,
  sparse_reads=4,
  batch_first=True,
  gpu_id=0
)

(controller_hidden, memory, read_vectors) = (None, None, None)

output, (controller_hidden, memory, read_vectors) = \
  rnn(torch.randn(10, 4, 64), (controller_hidden, memory, read_vectors), reset_experience=True)

Debugging

The debug option causes the network to also return its memory hidden vectors (as numpy ndarrays) for the first batch of each forward step. These vectors can then be analyzed or visualized, for example with visdom.

import torch
from dnc import SAM

rnn = SAM(
  input_size=64,
  hidden_size=128,
  rnn_type='lstm',
  num_layers=4,
  nr_cells=100,
  cell_size=32,
  read_heads=4,
  batch_first=True,
  sparse_reads=4,
  gpu_id=0,
  debug=True
)

(controller_hidden, memory, read_vectors) = (None, None, None)

output, (controller_hidden, memory, read_vectors), debug_memory = \
  rnn(torch.randn(10, 4, 64), (controller_hidden, memory, read_vectors), reset_experience=True)

Memory vectors returned by forward pass (np.ndarray):

Key Y axis (dimensions) X axis (dimensions)
debug_memory['memory'] layer * time nr_cells * cell_size
debug_memory['visible_memory'] layer * time sparse_reads+2*temporal_reads+1 * nr_cells
debug_memory['read_positions'] layer * time sparse_reads+2*temporal_reads+1
debug_memory['read_weights'] layer * time read_heads * nr_cells
debug_memory['write_weights'] layer * time nr_cells
debug_memory['usage'] layer * time nr_cells

Tasks

Copy task (with curriculum and generalization)

The copy task, as described in the original paper, is included in the repo.

From the project root:

python ./tasks/copy_task.py -cuda 0 -optim rmsprop -batch_size 32 -mem_slot 64 # (like original implementation)

python ./tasks/copy_task.py -cuda 0 -lr 0.001 -rnn_type lstm -nlayer 1 -nhlayer 2 -dropout 0 -mem_slot 32 -batch_size 1000 -optim adam -sequence_max_length 8 # (faster convergence)

For SDNCs:
python ./tasks/copy_task.py -cuda 0 -lr 0.001 -rnn_type lstm -memory_type sdnc -nlayer 1 -nhlayer 2 -dropout 0 -mem_slot 100 -mem_size 10  -read_heads 1 -sparse_reads 10 -batch_size 20 -optim adam -sequence_max_length 10

and for curriculum learning for SDNCs:
python ./tasks/copy_task.py -cuda 0 -lr 0.001 -rnn_type lstm -memory_type sdnc -nlayer 1 -nhlayer 2 -dropout 0 -mem_slot 100 -mem_size 10  -read_heads 1 -sparse_reads 4 -temporal_reads 4 -batch_size 20 -optim adam -sequence_max_length 4 -curriculum_increment 2 -curriculum_freq 10000

For the full set of options, see:

python ./tasks/copy_task.py --help

The copy task can be used to debug memory using Visdom.

Additional steps required:

pip install visdom
python -m visdom.server

Open http://localhost:8097/ in your browser, and execute the copy task:

python ./tasks/copy_task.py -cuda 0

The visdom dashboard shows memory as a heatmap for batch 0 every -summarize_freq iterations:

Visdom dashboard

Generalizing Addition task

The adding task is as described in this github pull request. This task

  • creates one-hot vectors of size input_size, each representing a number
  • feeds a sequence of them to the network
  • adds the decoded network outputs to obtain the predicted sum

The task first trains the network on sequences of length ~100, and then tests whether the network generalizes to lengths of ~1000.

python ./tasks/adding_task.py -cuda 0 -lr 0.0001 -rnn_type lstm -memory_type sam -nlayer 1 -nhlayer 1 -nhid 100 -dropout 0 -mem_slot 1000 -mem_size 32 -read_heads 1 -sparse_reads 4 -batch_size 20 -optim rmsprop -input_size 3 -sequence_max_length 100
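
A rough sketch of how such a batch could be generated (a hypothetical helper for illustration, not the repo's actual generate_data):

import torch

def make_addition_batch(batch_size, seq_len, input_size):
  # each time step is a one-hot vector of size input_size; the encoded number is its index
  digits = torch.randint(0, input_size, (batch_size, seq_len))
  inputs = torch.zeros(batch_size, seq_len, input_size)
  inputs.scatter_(2, digits.unsqueeze(-1), 1.0)
  # the target is the sum of the encoded numbers over the whole sequence
  targets = digits.sum(dim=1).float()
  return inputs, targets

x, y = make_addition_batch(batch_size=20, seq_len=100, input_size=3)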

Generalizing Argmax task

The second adding task is similar to the first one, except that the network's output at the last time step is expected to be the argmax of the input.

python ./tasks/argmax_task.py -cuda 0 -lr 0.0001 -rnn_type lstm -memory_type dnc -nlayer 1 -nhlayer 1 -nhid 100 -dropout 0 -mem_slot 100 -mem_size 10 -read_heads 2 -batch_size 1 -optim rmsprop -sequence_max_length 15 -input_size 10 -iterations 10000

Code Structure

  1. DNCs:
  2. SDNCs:
  3. SAMs:
  4. Tests:

General noteworthy stuff

  1. SDNCs use the FLANN approximate nearest neighbour library (via its python binding pyflann3) and FAISS.

FLANN can be installed either from pip (automatically as a dependency), or from source (e.g. for multithreading via OpenMP):

# install openmp first: e.g. `sudo pacman -S openmp` for Arch.
git clone git://github.com/mariusmuja/flann.git
cd flann
mkdir build
cd build
cmake ..
make -j 4
sudo make install

FAISS can be installed using:

conda install faiss-gpu -c pytorch

FAISS is much faster, has a GPU implementation, and is interoperable with PyTorch tensors. FAISS is used by default when available; otherwise we fall back to FLANN (see the sketch after this list).

  2. NaNs in the gradients are common; try different batch sizes.
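
Referring back to point 1 above, the FAISS/FLANN fallback can be pictured roughly like this (an illustrative sketch of the idea, not the library's actual index code):

import numpy as np

def build_knn_index(keys):
  # prefer FAISS (faster, GPU-capable); fall back to FLANN when FAISS is not installed
  try:
    import faiss
    index = faiss.IndexFlatL2(keys.shape[1])
    index.add(keys.astype(np.float32))
    return lambda queries, k: index.search(queries.astype(np.float32), k)[1]
  except ImportError:
    from pyflann import FLANN
    flann = FLANN()
    flann.build_index(keys.astype(np.float32))
    return lambda queries, k: flann.nn_index(queries.astype(np.float32), num_neighbors=k)[0]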

Repos referred to while creating this repo:

Comments
  • copy_task.py sample fails.

    testing with command line:

    python copy_task.py -cuda 0 -lr 0.001 -rnn_type lstm -nlayer 1 -nhlayer 2 -dropout 0 -mem_slot 32 -batch_size 1000 -optim adam -sequence_max_length 8 -iterations 100

    I get multiple errors when it finishes, first on the generate_data call which has undefined parameters:

    input_data, target_output, loss_weights = generate_data(random_length, input_size)
    

    NameError: name 'input_size' is not defined

    And then after fixing that I get: output = output[:, -1, :].sum().data.cpu().numpy()[0] IndexError: too many indices for array

    Looks like that bit of code hasn't been used. I have tried to fix it, but I'm unclear about the solution for the second issue as I'm new to pytorch; thanks in advance for any fixes.

    ChrisP.

    opened by chrispugmire 3
  • Problem of the Softmax on Read Mode

    https://github.com/ixaxaar/pytorch-dnc/blob/1db78511fe5622ade1c554d265a5d9d729c8801d/dnc/memory.py#L235

    Should the softmax be applied on the last dimension? (i.e. the dimension of the read mode)

    Currently, each read mode would always return 1 if the model has only one read head.

    opened by yat011 3
  • Question about the running speed of Pyflann and Faiss  for the SAM model

    I can't install the Faiss environment due to circumstances beyond my control, so I wonder how using Faiss-gpu versus Pyflann influences the actual training speed of the SAM model. For example, in the copy task, what are the typical epoch times with these two methods? Can you give me a rough reference?

    opened by zoharli 3
  • PySide dependency error

    I followed your instructions to run and visualize copy_task.py in visdom, but am encountering some dependency errors. I am using Python 3.6.

    First error when running python ./tasks/copy_task.py -cuda 0:

    File "C:\Users\alexander.d.payne\AppData\Local\Programs\Python\Python36\lib\site-packages\pyflann\bindings\flann_ctypes.py", line 171, in <module> raise ImportError('Cannot load dynamic library. Did you compile FLANN?') ImportError: Cannot load dynamic library. Did you compile FLANN?

    pip installed pyflann, and ran again:

    File "C:\Users\alexander.d.payne\AppData\Local\Programs\Python\Python36\lib\site-packages\pyflann\__init__.py", line 27, in <module> from index import * ModuleNotFoundError: No module named 'index'

    pip installed index, and ran again:

    C:\Users\alexander.d.payne\Documents\pytorch-dnc-master>pip install index Collecting index Downloading ...files.pythonhosted.org/packages/7f/59/65da893e04f3eb49f73e6770e0999c57230669a484b14ca574154e9b75d3/index-0.2.tar.gz Collecting PySide (from index) Downloading ...files.pythonhosted.org/packages/36/ac/ca31db6f2225844d37a41b10615c3d371587677efd074db29855e7035de6/PySide-1.2.4.tar.gz (9.3MB) 100% |████████████████████████████████| 9.3MB 3.2MB/s Complete output from command python setup.py egg_info: only these python versions are supported: [(2, 6), (2, 7), (3, 2), (3, 3), (3, 4)] Command "python setup.py egg_info" failed with error code 1 in C:\Users\ALEXAN~1.PAY\AppData\Local\Temp\pip-install-aph21f59\PySide\

    Even if I switched to Python 3.4, I don't see a torch option for that version on their website https://pytorch.org/, so is there any way around this? Thank you.

    opened by apayne19 3
  • Issues when using pytorch 0.4

    I get errors when trying to run both DNC and SDNC examples with pytorch 0.4.0. For DNC:

    (py36) [[email protected] test]$ python test_dnc.py 
    /amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
      from ._conv import register_converters as _register_converters
    /amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/dnc.py:118: UserWarning: nn.init.orthogonal is now deprecated in favor of nn.init.orthogonal_.
      orthogonal(self.output.weight)
    /amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/dnc.py:133: UserWarning: nn.init.xavier_uniform is now deprecated in favor of nn.init.xavier_uniform_.
      xavier_uniform(h)
    /amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/util.py:95: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
      soft_max_2d = F.softmax(input_2d)
    Traceback (most recent call last):
      File "test_dnc.py", line 19, in <module>
        rnn(torch.randn(10, 4, 64).cuda(), (controller_hidden, memory, read_vectors), True)
      File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
        result = self.forward(*input, **kwargs)
      File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/dnc.py", line 265, in forward
        inputs = [self.output(i) for i in inputs]
      File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/dnc.py", line 265, in <listcomp>
        inputs = [self.output(i) for i in inputs]
      File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
        result = self.forward(*input, **kwargs)
      File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 55, in forward
        return F.linear(input, self.weight, self.bias)
      File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/torch/nn/functional.py", line 992, in linear
        return torch.addmm(bias, input, weight.t())
    RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #4 'mat1'
    

    For SDNC:

    (py36) [[email protected] test]$ python test_dnc.py 
    /amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
      from ._conv import register_converters as _register_converters
    /amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/dnc.py:118: UserWarning: nn.init.orthogonal is now deprecated in favor of nn.init.orthogonal_.
      orthogonal(self.output.weight)
    /amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/sparse_temporal_memory.py:65: UserWarning: nn.init.orthogonal is now deprecated in favor of nn.init.orthogonal_.
      T.nn.init.orthogonal(self.interface_weights.weight)
    /amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/dnc.py:133: UserWarning: nn.init.xavier_uniform is now deprecated in favor of nn.init.xavier_uniform_.
      xavier_uniform(h)
    Traceback (most recent call last):
      File "test_dnc.py", line 20, in <module>
        rnn(torch.randn(10, 4, 64).cuda(), (controller_hidden, memory, read_vectors), True)
      File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
        result = self.forward(*input, **kwargs)
      File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/dnc.py", line 219, in forward
        controller_hidden, mem_hidden, last_read = self._init_hidden(hx, batch_size, reset_experience)
      File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/dnc.py", line 144, in _init_hidden
        mhx = self.memories[0].reset(batch_size, erase=reset_experience)
      File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/sparse_temporal_memory.py", line 126, in reset
        'read_positions': cuda(T.arange(0, c).expand(b, c), gpu_id=self.gpu_id).long()
      File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/util.py", line 30, in cuda
        return var(x.pin_memory(), requires_grad=grad).cuda(gpu_id, async=True)
    RuntimeError: invalid argument 3: Source tensor must be contiguous at /opt/conda/conda-bld/pytorch_1524590031827/work/aten/src/THC/generic/THCTensorCopy.c:114
    
    opened by Rahim16 3
  • TypeError: cat received an invalid combination of arguments - got (list, int), but expected one of:

    I'm trying to run your example for SAM, but I'm running into the following error:

    Traceback (most recent call last):
      File "dnc_test.py", line 20, in <module>
        rnn(torch.randn(10, 4, 64), (controller_hidden, memory, read_vectors), reset_experience=True)
      File "/home/xxx/miniconda3/envs/my_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/xxx/git_repos/pytorch-dnc/dnc/dnc.py", line 222, in forward
        inputs = [T.cat([input[:, x, :], last_read], 1) for x in range(max_length)]
      File "/home/xxx/git_repos/pytorch-dnc/dnc/dnc.py", line 222, in <listcomp>
        inputs = [T.cat([input[:, x, :], last_read], 1) for x in range(max_length)]
    TypeError: cat received an invalid combination of arguments - got (list, int), but expected one of:
     * (sequence[torch.cuda.FloatTensor] seq)
     * (sequence[torch.cuda.FloatTensor] seq, int dim)
          didn't match because some of the arguments have invalid types: (list, int)
    

    I tried loading the input tensor onto the gpu with .cuda() and transforming last_read to a Tensor with .data, but that led to other issues.

    There's also a typo in your documentation:

    output, (controller_hidden, memory, read_vectors) = \
      rnn(torch.randn(10, 4, 64), (controller_hidden, memory, read_vectors, reset_experience=True))
    

    should be

    output, (controller_hidden, memory, read_vectors) = \
      rnn(torch.randn(10, 4, 64), (controller_hidden, memory, read_vectors), reset_experience=True)
    
    opened by kierkegaard13 3
  • bug in cosine distance?

    I believe there's a bug in the function from dnc.util for computing cosine distance. First, I think you are trying to compute cosine similarity, not distance (sim = 1 - dist). Second, I think the current function implements neither cosine similarity nor distance. Here's a modified variant that returns the correct output for cosine similarity.

    def bcos(a, b, normBy=2):
        """Batchwise cosine similarity
        
        Arguments:
            a: 3D tensor of shape [b,m,w]
            b: 3D tensor of shape [b,r,w]
        Returns:
            cos: batchwise cosine similarity of shape [b,r,m]
        """
        dot = torch.bmm(a, b.transpose(1,2)) # [b,m,w] @ [b,w,r] -> [b,m,r]
        a_norm = torch.norm(a, normBy, dim=2).unsqueeze(2) # [b,m,1]
        b_norm = torch.norm(b, normBy, dim=2).unsqueeze(1) # [b,1,r]
        cos = dot / (a_norm * b_norm) # [b,m,r]
    
        return cos.transpose(1,2)  # [b,r,m]
    
    opened by rfeinman 2
  • reset_experience meaning

    Looking at the code it isn't clear to me what reset_experience does. Since the memory is an argument to the forward, is setting reset_experience equivalent to calling dnc(controller_state, None, read_vectors)?

    If this is the case, then when using dnc with inputs of shape [batch, time, feature] , reset_experience will clear the memory between batches. If we continuously call dnc with inputs of shape [batch, 1, feature], then we do not want to reset. Is this correct?

    opened by smorad 1
  • fix bug in function \theta for batchwise cosine similarity

    I wasn't able to keep the options "dimA" and "dimB" for non-default similarity dimensions, but I don't see those used anywhere in the repo.

    opened by rfeinman 1
  • Error when running copy_task.py

    I executed the copy task following the command line in the README: python copy_task.py -cuda 0 -optim rmsprop -batch_size 32 -mem_slot 64. I get a NameError at line 366: name 'input_size' is not defined. I can change input_size to args.input_size, but I think there are additional problems beyond that. The function generate_data requires 3 arguments, but only 2 are given. Is generate_data in line 366 a different function from the one defined in line 78?

    https://github.com/ixaxaar/pytorch-dnc/blob/016b541223bf801f3f3a617fa3942cc12ef71be9/tasks/copy_task.py#L78 https://github.com/ixaxaar/pytorch-dnc/blob/016b541223bf801f3f3a617fa3942cc12ef71be9/tasks/copy_task.py#L366

    Thank you,

    opened by jin8 1
  • When running adding task -- ModuleNotFoundError: No module named 'index'

      File "tasks/adding_task.py", line 25, in <module>
        from dnc.dnc import DNC
      File "/home/vanangamudi/projects/cloned/pytorch-dnc/dnc/__init__.py", line 5, in <module>
        from .sdnc import SDNC
      File "/home/vanangamudi/projects/cloned/pytorch-dnc/dnc/sdnc.py", line 15, in <module>
        from .sparse_temporal_memory import SparseTemporalMemory
      File "/home/vanangamudi/projects/cloned/pytorch-dnc/dnc/sparse_temporal_memory.py", line 11, in <module>
        from .flann_index import FLANNIndex
      File "/home/vanangamudi/projects/cloned/pytorch-dnc/dnc/flann_index.py", line 9, in <module>
        from pyflann import *
      File "/home/vanangamudi/env/torch/lib/python3.6/site-packages/pyflann/__init__.py", line 27, in <module>
        from index import *
    ModuleNotFoundError: No module named 'index'
    
    opened by vanangamudi 1
  • pytorch LTS support (1.8.2) or stable (1.11.1)

    Hello!

    I was wondering if someone can confirm that this package still runs under PyTorch LTS (1.8.2) or the current stable (1.11.1)?

    I'm getting a curious error. Note this is for CPU training. Maybe someone can confirm this is only broken under cpu training.

    Thank you!

    `03:44 $ python ./tasks/adding_task.py -lr 0.0001 -rnn_type lstm -memory_type sam -nlayer 1 -nhlayer 1 -nhid 100 -dropout 0 -mem_slot 1000 -mem_size 32 -read_heads 1 -sparse_reads 4 -batch_size 20 -optim rmsprop -input_size 3 -sequence_max_length 100 Namespace(batch_size=20, check_freq=100, clip=50, cuda=-1, dropout=0.0, input_size=3, iterations=2000, lr=0.0001, mem_size=32, mem_slot=1000, memory_type='sam', nhid=100, nhlayer=1, nlayer=1, optim='rmsprop', read_heads=1, rnn_type='lstm', sequence_max_length=100, sparse_reads=4, summarize_freq=100, temporal_reads=2, visdom=False) Using CPU.


    SAM(3, 100, num_hidden_layers=1, nr_cells=1000, read_heads=1, cell_size=32) SAM( (lstm_layer_0): LSTM(35, 100, batch_first=True) (rnn_layer_memory_shared): SparseMemory( (interface_weights): Linear(in_features=100, out_features=70, bias=True) ) (output): Linear(in_features=132, out_features=3, bias=True) )

    Iteration 0/2000 Falling back to FLANN (CPU). For using faster, GPU based indexes, install FAISS: "conda install faiss-gpu -c pytorch" Traceback (most recent call last): File "./tasks/adding_task.py", line 222, in loss.backward() File "/home/eziegenbalg/.conda/envs/default/lib/python3.8/site-packages/torch/tensor.py", line 245, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "/home/eziegenbalg/.conda/envs/default/lib/python3.8/site-packages/torch/autograd/init.py", line 145, in backward Variable._execution_engine.run_backward( RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1, 1000]], which is output 0 of AsStridedBackward, is at version 70; expected version 69 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

    ^C (default) ✘-INT ~/pytorch-dnc [master|✚ 2] 03:45 $ `

    opened by ziegenbalg 5
  • Question about allocation weighting

    The paper describes the allocation weighting vector as:

    $a_t[\phi_t[j]] = (1 - u_t[\phi_t[j]]) \prod_{i=1}^{j-1} u_t[\phi_t[i]]$

    In your part of the code where you calculate the right-part product you do this:

    v = var(sorted_usage.data.new(batch_size, 1).fill_(1))
    cat_sorted_usage = T.cat((v, sorted_usage), 1)
    prod_sorted_usage = T.cumprod(cat_sorted_usage, 1)[:, :-1]
    

    Why do you create the var "v", which contains "ones", and concatenate it? This does not seem the same as in the paper.

    Thanks, Peter

    opened by PeterDeWachter1998 0
  • A question about memory initialization.

    Hi,

    I am a bit confused about how memory states are saved in the DNC. To be more specific, at the start of training we obviously have to initialize the memory (filled with 0s in the code). Once training has finished, I would expect the memory values to be saved for use at test time. But it turns out that the memory hidden states are reset to 0s again (as in the erase part of dnc/memory.py, lines 69-75).

    Could you please give me some explanations about this? Thank you in advance! Really need your help.

    opened by LiUzHiAn 1
  • batch_first argument doesn’t work

    Hi, I just want to let you know that with the current implementation (file dnc.py, lines 76-86), batch_first will always be True. It is trivial but sometimes troublesome. Have a nice day.

    opened by Trungmaster5 1
  • Unresolved reference 'output'

    https://github.com/ixaxaar/pytorch-dnc/blob/016b541223bf801f3f3a617fa3942cc12ef71be9/dnc/dnc.py#L270

    The variable named 'output' (as can be seen above) is an unresolved reference. Kindly fix it.

    opened by denizetkar 1