A PyTorch-centric hybrid classical-quantum machine learning framework

Overview

torchquantum

A PyTorch-centric hybrid classical-quantum dynamic neural networks framework.

MIT License

News

  • Added a simple example script that uses quantum gates to do MNIST classification.
  • v0.0.1 is available. Feedback is highly welcome!

Installation

git clone https://github.com/Hanrui-Wang/pytorch-quantum.git
cd pytorch-quantum
pip install --editable .

Usage

Constructing a quantum NN model is as simple as constructing a normal PyTorch model.

import torch.nn as nn
import torch.nn.functional as F 
import torchquantum as tq
import torchquantum.functional as tqf

class QFCModel(nn.Module):
  def __init__(self):
    super().__init__()
    self.n_wires = 4
    self.q_device = tq.QuantumDevice(n_wires=self.n_wires)
    self.measure = tq.MeasureAll(tq.PauliZ)
    
    self.encoder_gates = [tqf.rx] * 4 + [tqf.ry] * 4 + \
                         [tqf.rz] * 4 + [tqf.rx] * 4
    self.rx0 = tq.RX(has_params=True, trainable=True)
    self.ry0 = tq.RY(has_params=True, trainable=True)
    self.rz0 = tq.RZ(has_params=True, trainable=True)
    self.crx0 = tq.CRX(has_params=True, trainable=True)

  def forward(self, x):
    bsz = x.shape[0]
    # down-sample the image
    x = F.avg_pool2d(x, 6).view(bsz, 16)
    
    # reset qubit states
    self.q_device.reset_states(bsz)
    
    # encode the classical image to quantum domain
    for k, gate in enumerate(self.encoder_gates):
      gate(self.q_device, wires=k % self.n_wires, params=x[:, k])
    
    # add some trainable gates (need to instantiate ahead of time)
    self.rx0(self.q_device, wires=0)
    self.ry0(self.q_device, wires=1)
    self.rz0(self.q_device, wires=3)
    self.crx0(self.q_device, wires=[0, 2])
    
    # add some more non-parameterized gates (add on-the-fly)
    tqf.hadamard(self.q_device, wires=3)
    tqf.sx(self.q_device, wires=2)
    tqf.cnot(self.q_device, wires=[3, 0])
    tqf.qubitunitary(self.q_device, wires=[1, 2],
                     params=[[1, 0, 0, 0],
                             [0, 1, 0, 0],
                             [0, 0, 0, 1j],
                             [0, 0, -1j, 0]])
    
    # perform measurement to get expectations (back to classical domain)
    x = self.measure(self.q_device).reshape(bsz, 2, 2)
    
    # classification
    x = x.sum(-1).squeeze()
    x = F.log_softmax(x, dim=1)

    return x
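
A minimal training sketch for the model above (the batch size, optimizer, and random data here are illustrative placeholders, not part of the repo):

import torch
import torch.nn.functional as F

model = QFCModel()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-3)

images = torch.rand(32, 1, 28, 28)    # stand-in for a batch of MNIST images
labels = torch.randint(0, 2, (32,))   # the model above outputs 2 log-probabilities per sample

log_probs = model(images)             # shape [32, 2]
loss = F.nll_loss(log_probs, labels)
loss.backward()                       # gradients flow through the parameterized quantum gates
optimizer.step()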

Features

  • Easy construction of parameterized quantum circuits in PyTorch.
  • Batch-mode inference and training on CPU/GPU.
  • Dynamic computation graph for easy debugging.
  • Easy deployment on real quantum devices such as IBMQ.

TODOs

  • Support more gates
  • Support compiling a unitary from its description to speed up training
  • Support measurement methods other than the analytic method
  • In einsum, support multiple qubits sharing one letter so that more than 26 qubits can be simulated
  • Support a bmm-based implementation to solve the scalability issue
  • Support conversion from torchquantum to Qiskit

Dependencies

  • Python >= 3.7
  • PyTorch >= 1.8.0
  • configargparse >= 0.14
  • GPU model training requires NVIDIA GPUs

MNIST Example

Train a quantum circuit to perform the MNIST classification task and deploy it on the real IBM Yorktown quantum computer, as in the mnist_example.py script:

python mnist_example.py

Files

File             Description
devices.py       QuantumDevice class which stores the statevector
encoding.py      Encoding layers to encode classical values to quantum domain
functional.py    Quantum gate functions
operators.py     Quantum gate classes
layers.py        Layer templates such as RandomLayer
measure.py       Measurement of quantum states to get classical values
graph.py         Quantum gate graph used in static mode
super_layer.py   Layer templates for SuperCircuits
plugins/qiskit*  Convertors and processors for easy deployment on IBMQ
examples/        More examples for training QML and VQE models
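
To see how the pieces above fit together outside a model class, here is a minimal Bell-state sketch that uses only calls already shown in the Usage section (treat it as illustrative, not canonical API documentation):

import torchquantum as tq
import torchquantum.functional as tqf

qdev = tq.QuantumDevice(n_wires=2)
qdev.reset_states(1)                       # batch size 1
tqf.hadamard(qdev, wires=0)                # gate from functional.py
tqf.cnot(qdev, wires=[0, 1])               # entangle the two wires
expval = tq.MeasureAll(tq.PauliZ)(qdev)    # measure.py: Pauli-Z expectations, shape [1, 2]
print(expval)                              # both expectations ~0 for the Bell state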

More Examples

The examples/ folder contains more examples for training QML and VQE models. Example usage for a QML circuit:

# train the circuit with 36 params in the U3+CU3 space
python examples/train.py examples/configs/mnist/four0123/train/baseline/u3cu3_s0/rand/param36.yml

# evaluate the circuit with torchquantum
python examples/eval.py examples/configs/mnist/four0123/eval/tq/all.yml --run-dir=runs/mnist.four0123.train.baseline.u3cu3_s0.rand.param36

# evaluate the circuit with real IBMQ-Yorktown quantum computer
python examples/eval.py examples/configs/mnist/four0123/eval/x2/real/opt2/300.yml --run-dir=runs/mnist.four0123.train.baseline.u3cu3_s0.rand.param36

Example usage for a VQE circuit:

# Train the VQE circuit for h2
python examples/train.py examples/configs/vqe/h2/train/baseline/u3cu3_s0/human/param12.yml

# evaluate the VQE circuit with torchquantum
python examples/eval.py examples/configs/vqe/h2/eval/tq/all.yml --run-dir=runs/vqe.h2.train.baseline.u3cu3_s0.human.param12/

# evaluate the VQE circuit with real IBMQ-Yorktown quantum computer
python examples/eval.py examples/configs/vqe/h2/eval/x2/real/opt2/all.yml --run-dir=runs/vqe.h2.train.baseline.u3cu3_s0.human.param12/

Detailed documentation is coming soon.

Contact

Hanrui Wang ([email protected])

Comments
  • Cannot use qiskit simulation when running mnist_example.py

    I tried to run mnist_example.py with my IBM Q token already set, but I ran into trouble during the Qiskit simulation. The line is

    valid_test(dataflow, 'test', model, device, qiskit=True)
    

    I think the execution should be fast, but it got stuck after the following messages:

    Test with Qiskit Simulator
    [2022-03-22 22:36:14.573] Before transpile: {'depth': 32, 'size': 77, 'width': 8, 'n_single_gates': 62, 'n_two_gates': 11, 'n_three_more_gates': 0, 'n_gates_dict': {'ry': 19, 'rz': 24, 'rx': 17, 'cx': 10, 'crx': 1, 'h': 1, 'sx': 1, 'measure': 4}}
    [2022-03-22 22:36:14.864] After transpile: {'depth': 23, 'size': 49, 'width': 8, 'n_single_gates': 33, 'n_two_gates': 12, 'n_three_more_gates': 0, 'n_gates_dict': {'ry': 8, 'rz': 7, 'rx': 4, 'cx': 12, 'u1': 2, 'u3': 11, 'u2': 1, 'measure': 4}}
    

    I interrupted the program with ctrl+c after 2-3 minutes and got a very long error log from interrupting multiprocessing. I would like to know what causes this and how to deal with it.

    Thanks!


    I am using the torchquantum master branch with Qiskit 0.19.2.

    error_log.txt

    opened by royess 10
  • Always print “The qiskit parameterization bug is already fixed!” when running mnist_example.py

    I tried to run mnist_example.py and commented out the later part about Qiskit.

    I tried to run it with the following command:

    python mnist_example.py --epochs 1

    But it always prints "The qiskit parameterization bug is already fixed!" in the terminal. I wish I had a way to stop the printing, but I haven't found one yet.

    Thanks!

    print_log.txt

    opened by ex193170 5
  • Testing of mnist_example_no_binding.py file produces error

    Hi, I tried testing the mnist_example_no_binding.py file. I keep getting the below error:

      File "C:\Users\manuc\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\__init__.py", line 126, in <module>
        raise err
    OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "C:\Users\manuc\AppData\Local\Programs\Python\Python39\
    lib\site-packages\torch\lib\cusparse64_11.dll" or one of its dependencies.
    Traceback (most recent call last):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "<string>", line 1, in <module>
    
    

    Please suggest a way to resolve this. Because of this error, my testing is not yet complete.

    opened by manu123416 4
  • Cannot use Qiskit simulation when running example1

    I tried to run torchquantum-master\artifact\example1\mnist_example.py, but I also ran into trouble during the Qiskit simulation.

    Because the file "examples.core.datasets" is missing, I copied it from https://zenodo.org/record/5787244#.YbunmBPMJhE (\torchquantum-master\examples\core\datasets). To avoid a BrokenPipeError, I set "num_workers" in line 152 to 0.

    After these messages:

    Test with Qiskit Simulator
    [2022-10-20 14:33:05.579] Before transpile: {'depth': 36, 'size': 77, 'width': 8, 'n_single_gates': 52, 'n_two_gates': 21, 'n_three_more_gates': 0, 'n_gates_dict': {'ry': 19, 'rz': 18, 'rx': 13, 'cx': 20, 'crx': 1, 'h': 1, 'sx': 1, 'measure': 4}}
    [2022-10-20 14:33:06.257] After transpile: {'depth': 31, 'size': 61, 'width': 8, 'n_single_gates': 37, 'n_two_gates': 20, 'n_three_more_gates': 0, 'n_gates_dict': {'ry': 11, 'rz': 7, 'rx': 6, 'cx': 20, 'u3': 11, 'u1': 2, 'measure': 4}}
    

    an error occurred, saying "need at least one array to stack". The details are in the file "errorlog1".

    I also tried adding ", parallel=False" and modifying the file qiskit/assembler/assemble_circuits.py as in issue #9, but another error occurred; the details are in the file "errorlog2".

    The version information is as follows:

    >>> import qiskit
    >>> qiskit.version.QiskitVersion()
    {'qiskit-terra': '0.19.2', 'qiskit-aer': '0.10.3', 'qiskit-ignis': '0.7.0', 'qiskit-ibmq-provider': '0.18.3', 'qiskit-aqua': '0.9.5', 'qiskit': '0.34.2', 'qiskit-nature': None, 'qiskit-finance': None, 'qiskit-optimization': None, 'qiskit-machine-learning': None}
    

    and I'm running the code under python 3.9.

    By the way, I tried the code in https://zenodo.org/record/5787244#.YbunmBPMJhE.. The same problem occurred. I wonder how to deal with it. Any help would be greatly appreciated.

    errorlog1.txt errorlog2.txt

    opened by yzr-mint 2
  • Code for reproducing QuantumNAS results missing

    The .py and .yml files referenced in the shell scripts used to reproduce the results from the QuantumNAS paper seem to be missing - when running the Colab notebooks, I always run into this error:

    can't open file 'examples/train.py': [Errno 2] No such file or directory

    I tried searching for the files in the repo manually, but could not find the .py or the .yml files anywhere.

    opened by SashwatAnagolum 2
  • GPU is not utilized during VQE training

    I tried to use the code in the VQE examples but found that the GPU was not utilized, although 2 GB of GPU memory is used.

    My configuration:

    [2022-05-31 13:54:52.758] /home/yuxuan/.julia/conda/3/envs/qtorch39/bin/python  examples/vqe/xxz_noncritical_configs.yml --gpu 0
    [2022-05-31 13:54:52.758] Training started: "runs/vqe.xxz_noncritical_configs".
    dataset:
      name: vqe
      input_name: input
      target_name: target
    trainer:
      name: params_shift_trainer
    run:
      steps_per_epoch: 10
      workers_per_gpu: 8
      n_epochs: 10
      bsz: 1
      device: gpu
    model:
      transpile_before_run: False
      load_op_list: False
      hamil_filename: examples/vqe/h2.txt
      arch:
        n_wires: 6
        n_layers_per_block: 6
        q_layer_name: seth_0
        n_blocks: 6
      name: vqe_0
    qiskit:
      use_qiskit: False
      use_qiskit_train: True
      use_qiskit_valid: True
      use_real_qc: False
      backend_name: ibmq_quito
      noise_model_name: None
      n_shots: 8192
      initial_layout: None
      optimization_level: 0
      max_jobs: 1
    ckpt:
      load_ckpt: False
      load_trainer: False
      name: checkpoints/min-loss-valid.pt
    debug:
      pdb: False
      set_seed: False
    optimizer:
      name: adam
      lr: 0.05
      weight_decay: 0.0001
      lambda_lr: 0.01
    criterion:
      name: minimize
    scheduler:
      name: cosine
    callbacks: [{'callback': 'InferenceRunner', 'split': 'valid', 'subcallbacks': [{'metrics': 'MinError', 'name': 'loss/valid'}]}, {'callback': 'MinSaver', 'name': 'loss/valid'}, {'callback': 'Saver', 'max_to_keep': 10}]
    regularization:
      unitary_loss: False
    legalization:
      legalize: False
    

    GPU status by Nvitop: (The last line is for VQE training process.)

    [nvitop screenshot]

    Version information:

    python                    3.9.11               h12debd9_2
    tensorboard               2.9.0                    pypi_0    pypi
    tensorboard-data-server   0.6.1                    pypi_0    pypi
    tensorboard-plugin-wit    1.8.1                    pypi_0    pypi
    tensorflow                2.9.1                    pypi_0    pypi
    tensorflow-estimator      2.9.0                    pypi_0    pypi
    tensorflow-io-gcs-filesystem 0.26.0                   pypi_0    pypi
    tensorpack                0.11                     pypi_0    pypi
    torch                     1.11.0+cu113             pypi_0    pypi
    torchaudio                0.11.0+cu113             pypi_0    pypi
    torchpack                 0.3.1                    pypi_0    pypi
    torchquantum              0.1.0                     dev_0    <develop>
    torchvision               0.12.0+cu113             pypi_0    pypi
    
    opened by royess 2
  • inconvenient to run VQE example

    I find it not very convenient to run the VQE example. If I run python examples/vqe/train.py directly, I get an error message:

    ......
        from examples.vqe import builder
    ModuleNotFoundError: No module named 'examples.vqe'
    

    But python -c "from examples.vqe import builder" works fine, which is strange to me.

    My current way to run the script is to open a Python REPL and run:

    from examples.vqe import train
    import sys
    sys.argv.append('examples/vqe/vqe_configs.yml')
    train.main()
    

    I wonder whether I can do it in a simpler way, or whether the code needs to be modified.

    opened by royess 2
  • Got stuck while running .\artifact\example2

    I tried to run torchquantum-master\artifact\example2\quantumnas\1_train_supercircuit.sh, but it got stuck.

    The program seems stuck after it begins training. After the message "0% 0/92 [00:00<?, ?it/s]" appeared, I waited for hours but nothing happened. The output is in the file "errorlog.log".

    The version information is as follows:

    >>> import qiskit
    >>> qiskit.version.QiskitVersion()
    {'qiskit-terra': '0.19.2', 'qiskit-aer': '0.10.3', 'qiskit-ignis': '0.7.0', 'qiskit-ibmq-provider': '0.18.3', 'qiskit-aqua': '0.9.5', 'qiskit': '0.34.2', 'qiskit-nature': None, 'qiskit-finance': None, 'qiskit-optimization': None, 'qiskit-machine-learning': None}
    

    and I'm running the code under python 3.9.

    I wonder how to deal with it. Any help would be greatly appreciated.

    errorlog.log

    opened by yzr-mint 1
  • Apple Silicon Mac needs one more step: install hdf5

    git clone https://github.com/mit-han-lab/torchquantum.git
    cd torchquantum
    brew install hdf5
    export HDF5_DIR="$(brew --prefix hdf5)"
    pip install --editable .
    python fix_qiskit_parameterization.py
    

    reference: https://stackoverflow.com/questions/66741778/how-to-install-h5py-needed-for-keras-on-macos-with-m1

    opened by frogcjn 1
  • (feat/github-ci) ensure python style consistency, add pre-commit hooks

    Hello 👋

    This small PR adds new GitHub CI workflows and a flake8 configuration to the torchquantum project to ensure Python style consistency and prevent common mistakes in the codebase.

    opened by almostagi 1
  • How to save the QNN model like a normal pytorch model?

    Hi,

    How can I save the QNN model in such a way that it can be loaded back the same way we load a normal PyTorch model? Basically, I want to load it for this use case.

    I did check the saving example from the examples section, but it saves only a checkpoint, not the entire model.
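
    A standard PyTorch state_dict workflow may already cover this, assuming the trainable gates register their parameters like ordinary modules (untested sketch):

    import torch

    model = QFCModel()                               # e.g. the model class from the Usage section
    torch.save(model.state_dict(), "qfc_model.pt")   # save the parameters

    restored = QFCModel()                            # rebuild the architecture, then load the weights
    restored.load_state_dict(torch.load("qfc_model.pt"))
    restored.eval()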

    opened by sauravtii 1
  • A simple way to improve the regression example

    https://github.com/mit-han-lab/torchquantum/blob/7122388a3d58d5b6c48db44fbd4b27198941ed2f/examples/regression/run_regression.py#L138

    In this example, after the measurement, you get 3 numbers (output_all.shape = [bsz, 3]). However, in the loss function, only the 2nd number is utilized, i.e., [:, 1]. This leads to poor performance. A simple fix can significantly improve the performance (I already tested).

    1. Add res = self.linear(res) as the last step of the self.forward() function, where self.linear = torch.nn.Linear(self.n_wires, 1)
    2. The targets need unsqueeze(dim=-1) so that the dimensions of outputs_all and targets match (see the sketch below)
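
    A hedged sketch of these two changes (the class and variable names below are illustrative, not the actual run_regression.py code):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RegressionHead(nn.Module):
        # Illustrative wrapper: combine all n_wires measured expectations into one output.
        def __init__(self, n_wires: int = 3):
            super().__init__()
            self.n_wires = n_wires
            self.linear = nn.Linear(self.n_wires, 1)   # change 1: use all wires, not only [:, 1]

        def forward(self, res: torch.Tensor) -> torch.Tensor:
            # res: [bsz, n_wires] expectation values from the measurement step
            return self.linear(res)

    head = RegressionHead(n_wires=3)
    res = torch.rand(8, 3)                             # stand-in for measured expectations
    targets = torch.rand(8)
    # change 2: unsqueeze the targets so both tensors have shape [8, 1]
    loss = F.mse_loss(head(res), targets.unsqueeze(dim=-1))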

    BTW: I have been playing around with torchquantum recently. It is a very good tool.

    opened by caitaozhan 0
  • Support for fake backends

    First of all, I appreciate your effort! This framework is so helpful for new learners!

    I think it would be great if this framework supports fake backends as well for reproducibility!

    Thank you.

    opened by j0807s 1
  • Train data label and image are different

    Hi, I tried testing your quantum neural network code in a Jupyter notebook. I think there is a bug in the training data.

    dataset = MNIST(root='../Data_Manu',
                    train_valid_split_ratio=[0.9, 0.1],
                    digits_of_interest=[3, 5],
                    # n_train_samples=75,
                    n_test_samples=75)
    

    data_train = dataset['train'][0]

    {'image': tensor([[[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.3733, -0.1696,
               -0.1696,  0.8868,  1.4468,  2.1087,  0.0213, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242,  0.1995,  0.6704,  1.8032,  1.9560,  2.7960,
                2.8088,  2.7960,  2.7960,  2.7960,  2.4142, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
                1.6250,  2.1723,  1.3068,  1.3068,  1.3196,  1.3068,  1.3068,
                1.4978,  2.5542,  2.7069,  2.7960,  2.7960,  2.7960,  2.7960,
                2.8088,  2.7960,  2.7960,  2.3251,  1.8160, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
                2.2996,  2.7960,  2.7960,  2.7960,  2.8088,  2.7960,  2.7960,
                2.7960,  2.7960,  2.8088,  2.7960,  2.7960,  2.7960,  2.7960,
                2.0323,  0.9886,  0.3140, -0.3606, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
                0.7977,  2.8088,  2.8088,  2.8088,  2.8215,  2.8088,  2.8088,
                2.8088,  2.8088,  2.6433,  1.9560,  0.8232,  0.6322, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
                0.5049,  2.7960,  2.7960,  2.7960,  2.8088,  2.4778,  1.2941,
                0.6322,  0.0722, -0.0424, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,  0.4795,
                2.6433,  2.7960,  2.7960,  2.7960,  1.0523, -0.2715, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,  2.2360,
                2.7960,  2.7960,  2.5160,  0.5813, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,  2.8088,
                2.7960,  2.7960,  2.4906,  0.8232,  0.8359,  0.8232,  0.8232,
                0.8232, -0.1315, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,  2.0451,
                2.8088,  2.8088,  2.8088,  2.8088,  2.8215,  2.8088,  2.8088,
                2.8088,  2.8088,  2.0451,  0.1740, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,  0.1740,
                1.8796,  2.5415,  2.5415,  2.6433,  2.5542,  2.5415,  2.5415,
                2.5415,  2.6433,  2.8088,  2.3760, -0.2460, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.0424, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.0424,  2.4142,  2.7960,  2.1087, -0.3478, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.2206,  2.2360,  2.7960,  2.7960, -0.1824, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.0296,
                1.4850,  2.3378,  2.8088,  2.7960,  2.7960,  0.3013, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242,  0.6195,  1.6505,  2.8088,
                2.8088,  2.8088,  2.8215,  2.8088,  1.9051, -0.3224, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.3351,  0.5049,  2.0196,  2.8088,  2.7960,  2.7960,
                2.7960,  2.7960,  2.4524,  1.2050, -0.2715, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.3351,  1.7269,  2.7960,  2.7960,  2.8088,  2.7960,  2.7960,
                2.6306,  1.7905,  0.3395, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.3097,  1.4341,  2.2869,  2.2869,  2.2996,  1.7141,  1.0650,
               -0.1315, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242]]]),
     'digit': 1}
    
    The tensor matrix contains a 5, but the label shows 1?
    
    opened by manu123416 3
  • Density matrix and mixed state

    Hi,

    We are currently using TorchQuantum to implement hybrid models, and we're wondering whether TorchQuantum plans to support mixed states and density-matrix simulation in the near future, since we'd like to implement something like qiskit.quantum_info.partial_trace.

    Without density matrix/mixed states, is something like https://quantumai.google/reference/python/cirq/partial_trace_of_state_vector_as_mixture currently doable with Torchquantum?
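
    One way to approximate this today is to compute a reduced density matrix directly from the simulator's statevector; the sketch below is not part of the torchquantum API and assumes the kept wires are the first n_keep wires of a [bsz, 2**n_wires] state tensor:

    import torch

    def reduced_density_matrix(states: torch.Tensor, n_keep: int, n_wires: int) -> torch.Tensor:
        # Trace out all wires except the first n_keep; returns rho of shape [bsz, 2**n_keep, 2**n_keep].
        bsz = states.shape[0]
        psi = states.reshape(bsz, 2 ** n_keep, 2 ** (n_wires - n_keep))
        # rho[b, i, k] = sum_j psi[b, i, j] * conj(psi[b, k, j])
        return torch.einsum("bij,bkj->bik", psi, psi.conj())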

    Thanks for making such an awesome library available!

    opened by wcqc 6
Releases (v0.1.5)
  • v0.1.2 (Sep 14, 2022)

    1. Add support for state.gate such as state.h
    2. Add more examples

    What's Changed

    • RZ gate by @jessding in https://github.com/mit-han-lab/torchquantum/pull/1
    • [major] merge pruning branch to master by @Hanrui-Wang in https://github.com/mit-han-lab/torchquantum/pull/2
    • Jiaqi by @JeremieMelo in https://github.com/mit-han-lab/torchquantum/pull/3
    • Jiaqi by @JeremieMelo in https://github.com/mit-han-lab/torchquantum/pull/4
    • Params shift by @abclzr in https://github.com/mit-han-lab/torchquantum/pull/6
    • Corrected module name to import MNIST by @googlercolin in https://github.com/mit-han-lab/torchquantum/pull/23
    • modify doc conf to init docstring publish task by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/24
    • refine class template by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/25
    • [minor] update format and theme by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/26
    • [minor] adjust dark theme code block and add function template by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/27
    • add customized furo doc theme by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/28
    • [doc] add ipynb and md support into doc, add one example by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/29
    • [major] fix bugs in torchquantum/measure.py by @abclzr in https://github.com/mit-han-lab/torchquantum/pull/30
    • [doc] Fix examples page in examples/index.rst by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/31

    New Contributors

    • @jessding made their first contribution in https://github.com/mit-han-lab/torchquantum/pull/1
    • @Hanrui-Wang made their first contribution in https://github.com/mit-han-lab/torchquantum/pull/2
    • @JeremieMelo made their first contribution in https://github.com/mit-han-lab/torchquantum/pull/3
    • @abclzr made their first contribution in https://github.com/mit-han-lab/torchquantum/pull/6
    • @googlercolin made their first contribution in https://github.com/mit-han-lab/torchquantum/pull/23
    • @frogcjn made their first contribution in https://github.com/mit-han-lab/torchquantum/pull/24

    Full Changelog: https://github.com/mit-han-lab/torchquantum/commits/v0.1.2

Owner

MIT HAN Lab: Accelerating Deep Learning Computing