Finetune AlexNet with TensorFlow - Code for finetuning AlexNet in TensorFlow >= 1.2rc0

Overview

Finetune AlexNet with TensorFlow

Update 15.06.2017

I revised the entire code base to work with the new input pipeline that comes with TensorFlow >= 1.2rc0. You can find an explanation of the new input pipeline in a new blog post. You can use this code as before for finetuning AlexNet on your own dataset; only the OpenCV dependency is no longer necessary. The old code can be found in this past commit.

This repository contains all the code needed to finetune AlexNet on any arbitrary dataset. Besides the comments in the code itself, I also wrote an article, which you can find here, with further explanation.

All you need are the pretrained weights, which you can find here or convert yourself from the Caffe library using caffe-to-tensorflow. If you convert them on your own, take a look at the structure of the .npy weights file (dict of dicts or dict of lists).
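
If you convert the weights yourself, a quick way to check which of the two layouts you got is to load the file with NumPy and print the per-layer types. A minimal sketch, assuming the file is named bvlc_alexnet.npy and a NumPy version that needs allow_pickle=True (older versions accept the call without it):

    import numpy as np

    # The weights file is a pickled Python dict saved with np.save.
    weights = np.load('bvlc_alexnet.npy', encoding='bytes', allow_pickle=True).item()

    for layer_name, params in weights.items():
        if isinstance(params, dict):
            # "dict of dicts": {'weights': ..., 'biases': ...}
            shapes = {key: value.shape for key, value in params.items()}
        else:
            # "dict of lists": [weights_array, biases_array]
            shapes = [p.shape for p in params]
        print(layer_name, type(params).__name__, shapes)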

Note: I won't write too much of an explanation here, as I already wrote a long article about the entire code on my blog.

Requirements

  • Python 3
  • TensorFlow >= 1.2rc0
  • NumPy

TensorBoard support

The code has TensorFlow summaries implemented so that you can follow the training progress in TensorBoard (the summary directory is set in the configuration section of finetune.py and is what you pass to TensorBoard's --logdir).
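
If you want to add your own summaries, they follow the standard TF 1.x pattern: create summary ops, merge them, and write them with a FileWriter whose directory you then pass to TensorBoard via --logdir. A minimal self-contained sketch (the metric name and log directory are illustrative, not the ones defined in finetune.py):

    import tensorflow as tf

    tf.reset_default_graph()

    # Log one scalar per step so it shows up under the SCALARS tab in TensorBoard.
    metric = tf.placeholder(tf.float32, name='dummy_metric')
    tf.summary.scalar('dummy_metric', metric)
    merged_summary = tf.summary.merge_all()

    logdir = '/tmp/finetune_alexnet/tensorboard'  # point tensorboard --logdir here
    writer = tf.summary.FileWriter(logdir)

    with tf.Session() as sess:
        writer.add_graph(sess.graph)
        for step in range(10):
            summary = sess.run(merged_summary, feed_dict={metric: step * 0.1})
            writer.add_summary(summary, step)
    writer.close()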

Content

  • alexnet.py: Class with the graph definition of AlexNet (a short usage sketch follows this list).
  • finetune.py: Script to run the finetuning process.
  • datagenerator.py: Contains a wrapper class for the new input pipeline.
  • caffe_classes.py: List of the 1000 class names of ImageNet (copied from here).
  • validate_alexnet_on_imagenet.ipynb: Notebook to test the correct implementation of AlexNet and the pretrained weights on some images from the ImageNet database.
  • images/*: contains three example images needed for the notebook.
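
To get a feel for how these pieces fit together, here is a rough sketch of building the graph with the AlexNet class, roughly what the third cell of the validation notebook does. The constructor call matches the traceback quoted in the comments below; treating the last layer as model.fc8 and loading the pretrained weights via load_initial_weights(sess) are assumptions about alexnet.py, and the sketch expects bvlc_alexnet.npy to be present:

    import numpy as np
    import tensorflow as tf
    from alexnet import AlexNet

    tf.reset_default_graph()

    # Placeholders for a batch of 227x227 RGB images and the dropout keep probability.
    x = tf.placeholder(tf.float32, [None, 227, 227, 3])
    keep_prob = tf.placeholder(tf.float32)

    # Default config: no skipped layers, 1000 ImageNet classes (as in the notebook).
    model = AlexNet(x, keep_prob, 1000, [])
    softmax = tf.nn.softmax(model.fc8)  # assumption: fc8 is the unscaled score layer

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        model.load_initial_weights(sess)  # assumption: loads the bvlc_alexnet.npy weights
        probs = sess.run(softmax, feed_dict={x: np.zeros((1, 227, 227, 3), np.float32),
                                             keep_prob: 1.0})
        print(probs.shape)  # (1, 1000)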

Usage

All you need to touch is finetune.py, although I strongly recommend taking a look at the entire code of this repository. In the finetune.py script you will find a section of configuration settings that you have to adapt to your problem. If you do not want to touch the code any further than necessary, you have to provide two .txt files to the script (train.txt and val.txt). Each of them lists the complete paths to your train/val images together with the class number, in the following structure:

Example train.txt:
/path/to/train/image1.png 0
/path/to/train/image2.png 1
/path/to/train/image3.png 2
/path/to/train/image4.png 0
.
.

where the first column is the path and the second is the class label.
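
For reference, a file in this format can be wired into the new input pipeline along the following lines. This is a simplified sketch of what datagenerator.py does, not the exact implementation; it assumes TensorFlow >= 1.4 (where the tf.data module is available under this name) and uses the BGR ImageNet mean quoted in the comments further down:

    import tensorflow as tf

    def read_txt(txt_file):
        """Read 'path label' lines from a train.txt/val.txt style file."""
        paths, labels = [], []
        with open(txt_file) as f:
            for line in f:
                path, label = line.strip().rsplit(' ', 1)
                paths.append(path)
                labels.append(int(label))
        return paths, labels

    def build_dataset(txt_file, num_classes, batch_size):
        paths, labels = read_txt(txt_file)

        def _parse(filename, label):
            img = tf.image.decode_png(tf.read_file(filename), channels=3)  # decode_jpeg for .jpg files
            img = tf.image.resize_images(img, [227, 227])                   # returns float32
            img = img[:, :, ::-1] - tf.constant([104., 117., 124.])         # RGB -> BGR, subtract mean
            return img, tf.one_hot(label, num_classes)

        dataset = tf.data.Dataset.from_tensor_slices((paths, labels))
        return dataset.map(_parse).shuffle(1000).batch(batch_size)

    # usage: next_batch = build_dataset('train.txt', 2, 32).make_one_shot_iterator().get_next()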

The other option is to bring your own method of loading images and providing batches of images and labels, but then you have to adapt a few lines of the code.
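
If you go that route, all the training loop ultimately needs per step is a float32 image batch of shape (batch_size, 227, 227, 3) and a one-hot label batch of shape (batch_size, num_classes) to feed into the input placeholders. A rough sketch of such a hand-rolled batcher (the names are illustrative, not those used in finetune.py):

    import numpy as np

    def one_hot(labels, num_classes):
        """Convert integer class labels to a one-hot matrix."""
        encoded = np.zeros((len(labels), num_classes), dtype=np.float32)
        encoded[np.arange(len(labels)), labels] = 1.0
        return encoded

    def batches(images, labels, batch_size):
        """Yield (image_batch, label_batch) pairs from preloaded NumPy arrays."""
        for start in range(0, len(images), batch_size):
            yield images[start:start + batch_size], labels[start:start + batch_size]

    # Each pair would then be fed into the graph, e.g.:
    # sess.run(train_op, feed_dict={x: img_batch, y: lbl_batch, keep_prob: 0.5})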

Comments
  • Error in notebook: "ValueError: Variable conv1/weights already exists, disallowed. Did you mean to set reuse=True in VarScope"

    Hi, I am having some trouble running the third cell of the notebook. Error is:

    ValueErrorTraceback (most recent call last)
    <ipython-input-7-f7a1b7dd0c14> in <module>()
          7 
          8 #create model with default config ( == no skip_layer and 1000 units in the last layer)
    ----> 9 model = AlexNet(x, keep_prob, 1000, [])
         10 
         11 #define activation of last layer as score
    
    /mnt/ilcompf6d0/user/txiao/DockerFiles/finetune_alexnet_with_tensorflow/alexnet.py in __init__(self, x, keep_prob, num_classes, skip_layer, weights_path)
         39 
         40     # Call the create function to build the computational graph of AlexNet
    ---> 41     self.create()
         42 
         43   def create(self):
    
    /mnt/ilcompf6d0/user/txiao/DockerFiles/finetune_alexnet_with_tensorflow/alexnet.py in create(self)
         44 
         45     # 1st Layer: Conv (w ReLu) -> Pool -> Lrn
    ---> 46     conv1 = conv(self.X, 11, 11, 96, 4, 4, padding = 'VALID', name = 'conv1')
         47     pool1 = max_pool(conv1, 3, 3, 2, 2, padding = 'VALID', name = 'pool1')
         48     norm1 = lrn(pool1, 2, 2e-05, 0.75, name = 'norm1')
    
    /mnt/ilcompf6d0/user/txiao/DockerFiles/finetune_alexnet_with_tensorflow/alexnet.py in conv(x, filter_height, filter_width, num_filters, stride_y, stride_x, name, padding, groups)
        131   with tf.variable_scope(name) as scope:
        132     # Create tf variables for the weights and biases of the conv layer
    --> 133     weights = tf.get_variable('weights', shape = [filter_height, filter_width, input_channels/groups, num_filters])
        134     biases = tf.get_variable('biases', shape = [num_filters])
        135 
    
    /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.pyc in get_variable(name, shape, dtype, initializer, regularizer, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
       1047       collections=collections, caching_device=caching_device,
       1048       partitioner=partitioner, validate_shape=validate_shape,
    -> 1049       use_resource=use_resource, custom_getter=custom_getter)
       1050 get_variable_or_local_docstring = (
       1051     """%s
    
    /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.pyc in get_variable(self, var_store, name, shape, dtype, initializer, regularizer, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
        946           collections=collections, caching_device=caching_device,
        947           partitioner=partitioner, validate_shape=validate_shape,
    --> 948           use_resource=use_resource, custom_getter=custom_getter)
        949 
        950   def _get_partitioned_variable(self,
    
    /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.pyc in get_variable(self, name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
        354           reuse=reuse, trainable=trainable, collections=collections,
        355           caching_device=caching_device, partitioner=partitioner,
    --> 356           validate_shape=validate_shape, use_resource=use_resource)
        357 
        358   def _get_partitioned_variable(
    
    /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.pyc in _true_getter(name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource)
        339           trainable=trainable, collections=collections,
        340           caching_device=caching_device, validate_shape=validate_shape,
    --> 341           use_resource=use_resource)
        342 
        343     if custom_getter is not None:
    
    /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.pyc in _get_single_variable(self, name, shape, dtype, initializer, regularizer, partition_info, reuse, trainable, collections, caching_device, validate_shape, use_resource)
        651                          " Did you mean to set reuse=True in VarScope? "
        652                          "Originally defined at:\n\n%s" % (
    --> 653                              name, "".join(traceback.format_list(tb))))
        654       found_var = self._vars[name]
        655       if not shape.is_compatible_with(found_var.get_shape()):
    
    ValueError: Variable conv1/weights already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:
    
      File "alexnet.py", line 133, in conv
        weights = tf.get_variable('weights', shape = [filter_height, filter_width, input_channels/groups, num_filters])
      File "alexnet.py", line 46, in create
        conv1 = conv(self.X, 11, 11, 96, 4, 4, padding = 'VALID', name = 'conv1')
      File "alexnet.py", line 41, in __init__
        self.create()
    
    

    This error occurred with both the May 22 commit (TF 1.0) and the latest commit (TF 1.2rc0). I am on TF 1.1.0. (A graph-reset workaround is sketched after this comment list.)

    Cheers!

    opened by txizzle 29
  • I cannot see anything in TensorBoard

    I have successfully trained AlexNet on the Dogs vs. Cats dataset, but when I run TensorBoard I can only visualize this page:

    [screenshot: tensorboard]

    The rest of the pages are inactive:

    [screenshot: tensorboard2]

    I have TensorFlow 1.3.0 and Python 3.6.3, and I am using Windows 10. Do you have any idea what could cause the problem?

    opened by OctaM 18
  • Nan in summary histogram for: fc8/biases_0

    I ran finetune.py but got an error. Could someone please tell me what the problem is and how to modify the code to solve it? Thank you.

    The error is the following: tensorflow.python.framework.errors_impl.InvalidArgumentError: Nan in summary histogram for: fc8/biases_0

    opened by yz21606948 12
  • tensorflow.python.framework.errors_impl.InvalidArgumentError: Input shape axis 0 must equal 4, got shape [5]

    Hi, thanks for sharing. I am using your code to finetune AlexNet to classify images as blurry or clear. When I run finetune.py with my 'train.txt' and 'val.txt', after dozens of training batches I get this error: tensorflow.python.framework.errors_impl.InvalidArgumentError: Input shape axis 0 must equal 4, got shape [5] [[Node: unstack_1 = UnpackT=DT_INT32, axis=0, num=4]] [[Node: IteratorGetNext = IteratorGetNextoutput_shapes=[[?,227,227,3], [?,2]], output_types=[DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

    I use Python 3.4 and TF 1.4.0.

    opened by xiexie123 11
  • Classifying with Checkpoint

    Hey, so I finished fine-tuning my model, but I haven't really used TensorFlow that much before. I understand how to load the images I want to test on, but how do I use the checkpoint file to classify them?

    Also, any thoughts on parameters for this dataset: 330 training images of Benign and Malignant, along with a validation and a testing set of 40 images. I changed the learning rate to 0.005 and the batch size to 40; any other thoughts? I'm still only training the last two layers, and I'm not sure whether this is the right move. Thanks!!

    opened by Colhodm 10
  • Accuracy not the same as shown in the Post

    Hello,

    I used the same code as posted for the AlexNet network and then ran the validation with it, but I am getting very bad accuracy for some reason.

    ('Class name:', 'zebra', ' and Probability ', 0.6496317) ('Class name:', 'sea lion', ' and Probability ', 0.34226722) ('Class name:', 'llama', ' and Probability ', 0.36589968)

    I used Python and not a Jupyter notebook for the validation, so I printed the accuracy on the command line. I'm not sure why I get a different value. Can anyone give me some advice on this?

    opened by gkrish19 8
  • Nan in summary histogram for: fc8/weights_0

    Hi! It's very kind of you to share this repository. I used it to train on the Dogs vs. Cats dataset and it worked. But when I used the ILSVRC2012 dataset to fine-tune the network, I got the following error:

    Traceback (most recent call last):
      File "/home/chengy/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
        return fn(*args)
      File "/home/chengy/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
        options, feed_dict, fetch_list, target_list, run_metadata)
      File "/home/chengy/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
        run_metadata)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Nan in summary histogram for: fc8/weights_0
         [[Node: fc8/weights_0 = HistogramSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](fc8/weights_0/tag, fc8/weights/read/_41)]]

    I've tried replacing the loss function with the following: loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=score, labels=tf.clip_by_value(y, 1e-10, 1.0))) and changed the learning rate to a very low value such as 1e-4, but these changes did not make it better. I only use 20 of the 1000 ImageNet categories to fine-tune the model. Has anyone met the same problem or found a way to fix it? Thanks in advance.

    opened by ChengYeung1222 7
  • New DataGenerator gets worse accuracy than old DataGenerator?

    Using the new ImageDataGenerator (June 15 update), I get lower accuracy than the old ImageDataGenerator (using cv2).

    I have changed the new ImageDataGenerator very slightly (tf.image.decode_png => tf.image.decode_jpeg, VGG_MEAN => IMAGENET_MEAN), but still get significant difference in accuracy. To benchmark the two datagenerators, I am loading in the BVLC AlexNet weights and validating on the ImageNet validation set without shuffle.

    With the old cv2 ImageDataGenerator, I get 55.72% Top-1 Accuracy and 79.08% Top-5 Accuracy. With the new TF ImageDataGenerator, I get 48.58% Top-1 Accuracy and 73.21% Top-5 Accuracy.

    I suspect this has to do with how TF loads in images as opposed to CV2. Here is what the two different image processing steps look like:

    Old CV2 ImageDataGenerator:

    def next_batch(self, batch_size):
            """
            This function gets the next n ( = batch_size) images from the path list
        and labels and loads the images into memory
            """
            # Get next batch of image (path) and labels
            paths = self.images[self.pointer:self.pointer + batch_size]
            labels = self.labels[self.pointer:self.pointer + batch_size]
            
            #update pointer
            self.pointer += batch_size
            
            # Read images
            images = np.ndarray([batch_size, self.scale_size[0], self.scale_size[1], 3])
            for i in range(len(paths)):
                img = cv2.imread(paths[i])
                
                #flip image at random if flag is selected
                if self.horizontal_flip and np.random.random() < 0.5:
                    img = cv2.flip(img, 1)
                
                #rescale image
                img = cv2.resize(img, (self.scale_size[0], self.scale_size[1]))
                img = img.astype(np.float32)
                
                #subtract mean, which is np.array([104., 117., 124.])
                img -= self.mean
                                                                     
                images[i] = img
    
            # Expand labels to one hot encoding
            one_hot_labels = np.zeros((batch_size, self.n_classes))
            for i in range(len(labels)):
                one_hot_labels[i][labels[i]] = 1
    
            #return array of images and labels
            return images, one_hot_labels
    
    

    New TF ImageDataGenerator:

    def _parse_function_inference(self, filename, label):
            """Input parser for samples of the validation/test set."""
            # convert label number into one-hot-encoding
            one_hot = tf.one_hot(label, self.num_classes)
    
            # load and preprocess the image
            img_string = tf.read_file(filename)
            img_decoded = tf.image.decode_jpeg(img_string, channels=3)
            img_resized = tf.image.resize_images(img_decoded, [227, 227])
            
            IMAGENET_MEAN = tf.constant([104., 117., 124.], dtype=tf.float32)
            img_float = tf.to_float(img_resized)
            # RGB -> BGR
            img_bgr = img_float[:, :, ::-1]
            img_centered = tf.subtract(img_bgr, IMAGENET_MEAN)
    
            return img_centered, one_hot
    

    I believe I am using the CV2 Datagenerator correctly as I can successfully finetune and call train_generator.next_batch(batch_size) within the training and validation loops with no issue. I built off of the finetune.py file you provided for the new TF Datagenerator (I use cpu device cpu:0 and then run the datagenerator init ops), and then get batches within the training/validation loops with img_batch, label_batch = sess.run(next_batch).

    Any advice? Thanks a lot for providing and maintaining this project. Aside from this issue, it was super easy to work with and modify!

    opened by txizzle 7
  • ZeroDivisionError: float division by zero

    Hi, I tried reproducing the example, keeping it as simple as possible. I have train.txt as:

        images/cat1.png 0
        images/cat2.png 0
        images/cat3.png 0
        images/dog1.png 1
        images/dog2.png 1
        images/dog3.png 1

    and test.txt as:

        images/cat4.png 0
        images/dog4.png 1

    When I run finetune.py I get this error:

        Traceback (most recent call last):
          File "finetune.py", line 163, in <module>
            test_acc /= test_count
        ZeroDivisionError: float division by zero

    I tried debugging the error and found that val_batches_per_epoch is 0, so the inner loop body isn't executing.

    opened by disdaining 7
  • train_layers only works for fc7 and fc8

    Thanks for your contribution! When I reuse your code and want to train fc6, fc7, and fc8, the result is: cross entropy = nan. If I only train fc7 and fc8, it works well. I have also tested earlier layers, like conv1, conv2, ..., and the nan still appears. Do you have any suggestions to help me fix this?

    opened by jasstionzyf 6
  • Image Rescale Error

    I get the following error from datagenerator.py: error: (-215) ssize.width > 0 && ssize.height > 0 in function resize

    File "finetune_alexnet_with_tensorflow-master/datagenerator.py", line 150, in next_batch
        img = cv2.resize(img, (self.scale_size[0], self.scale_size[0]))

    I debugged the code as follows:

    			img = cv2.imread(paths[i])
    			print(img);
    			if img is None:
    				print (paths[i] + " : fail to read")
    			else:
    				print("Image is read");
    			exit();
    

    The error suggests that the image is not read, while in a separate program I verified that the image can be read:

    import numpy as np
    import cv2
    
    img = cv2.imread('images/ld2.png')
    cv2.imwrite('messigray.png',img) 
    

    I followed this solution but it didn't work.

    opened by disdaining 6
  • ValueError: Dimension size must be evenly divisible by 2 but is 1

    While testing the script finetune.py on TensorFlow 1.5, the system throws the error below. I list the information, including the ValueError, the possibly questionable code, and the traceback. I appreciate your help in advance.

    1. General Message

    ValueError: Dimension size must be evenly divisible by 2 but is 1 Number of ways to split should evenly divide the split dimension for 'split_1' (op: 'Split') with input shapes: [], [5,5,2,1] and with computed input

    2. Code

    It points to the following line of code.

    alexnet.py

    weight_groups = tf.split(value=weights, num_or_size_splits=group, axis=3) 
    

    3. Detailed Error Message:

    $ python finetune.py --conv=4 --dropout_rate=0.03

    Traceback (most recent call last): File "/home/nano/.virtualenvs/win/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1607, in _create_c_op c_op = c_api.TF_FinishOperation(op_desc) tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension size must be evenly divisible by 2 but is 1 Number of ways to split should evenly divide the split dimension for 'split_1' (op: 'Split') with input shapes: [], [5,5,2,1] and with computed input tensors: input[0] = <3>.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "finetune.py", line 97, in model = AlexNet(x, keep_prob, num_classes, train_layers) File "/home/nano/Documents/finetune_alexnet_with_tf/alexnet.py", line 56, in init self.create() File "/home/nano/Documents/finetune_alexnet_with_tf/alexnet.py", line 69, in create conv2 = conv(pool1, 5, 5, 256, 1, 1, name='conv2', group=2) File "/home/nano/Documents/finetune_alexnet_with_tf/alexnet.py", line 159, in conv weight_groups = tf.split(value=weights, num_or_size_splits=group, axis=3) File "/home/nano/.virtualenvs/win/lib/python3.6/site-packages/tensorflow_core/python/ops/array_ops.py", line 1684, in split axis=axis, num_split=num_or_size_splits, value=value, name=name) File "/home/nano/.virtualenvs/win/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_array_ops.py", line 9898, in split "Split", split_dim=axis, value=value, num_split=num_split, name=name) File "/home/nano/.virtualenvs/win/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper op_def=op_def) File "/home/nano/.virtualenvs/win/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func return func(*args, **kwargs) File "/home/nano/.virtualenvs/win/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op attrs, op_def, compute_device) File "/home/nano/.virtualenvs/win/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal op_def=op_def) File "/home/nano/.virtualenvs/win/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1770, in init control_input_ops) File "/home/nano/.virtualenvs/win/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1610, in _create_c_op raise ValueError(str(e)) ValueError: Dimension size must be evenly divisible by 2 but is 1 Number of ways to split should evenly divide the split dimension for 'split_1' (op: 'Split') with input shapes: [], [5,5,2,1] and with computed input

    opened by mikechen66 0
  • ValueError: The initial value's shape (()) is not compatible with the explicitly supplied `shape` argument ([11, 11, 3, 96]).

    I ran the script finetune.py on both TensorFlow 1.5 and TensorFlow 2.1. After solving many issues, I found a shape incompatibility issue in alexnet.py. Please help fix the issue at your convenience. I appreciate your help in advance.

    I came to understand that there is a scope conflict between the shape and the conv arguments. tf.variable_scope() usually defines variables within the with context and influences other related variables. For instance, shape=[filter_height, filter_width, input_channels//groups, num_filters] denotes [11, 11, 3, 96] in conv1; in contrast, conv1 is called with the arguments 11, 11, 96, 4, 4.

    1. Error Message

    ValueError: The initial value's shape (()) is not compatible with the explicitly supplied shape argument ([11, 11, 3, 96]).

    2. Attempted Changes

    I tried to make the following changes.

    1). Change the order of either the shape or the Conv1 arguments, for instance, shape=[11,11,96,3] or conv(self.X, 11, 11, 4, 4, 96, name='conv1', padding='VALID')

    2). Change the name of shape

    kernel_shape= [filter_height, filter_width, input_channels//groups, num_filters]
    

    3). Delete the shape and keep the argument.

    [filter_height, filter_width, input_channels//groups, num_filters]
    

    However, the following error variants still persisted.

    ValueError: The initial value's shape (()) is not compatible with the explicitly supplied shape argument ([11, 11, 3, 4]).

    ValueError: Shapes must be equal rank, but are 4 and 0 for 'conv1/Variable/Assign' (op: 'Assign') with input shapes: [11,11,3,4], [].

    ValueError: Shapes must be equal rank, but are 4 and 0 for 'conv1/Variable/Assign' (op: 'Assign') with input shapes: [11,11,3,96], [].

    The critical issue is definitely the "shape" argument in the second snippet, but I have not yet figured out a way to solve it.

    3. Snippets

    1st snippet.

    class AlexNet(object):
        .........
        def create(self):
            """Create the network graph."""
            # 1st Layer: Conv (w ReLu) -> Lrn -> Pool
            conv1 = conv(self.X, 11, 11, 4, 4, 96, name='conv1', padding='VALID')
            norm1 = lrn(conv1, 2, 2e-05, 0.75, name='norm1')
            pool1 = max_pool(norm1, 3, 3, 2, 2, name='pool1', padding='VALID')
    

    2nd snippet:

    def conv(x, filter_height, filter_width, stride_y, stride_x, num_filters, name,
        padding='SAME', groups=1):
        .........
        with tf.compat.v1.variable_scope(name) as scope:
            weights = tf.Variable('weights', shape=[filter_height, 
                                                                          filter_width, 
                                                                          input_channels//groups,
                                                                          num_filters])
            biases = tf.Variable('biases', shape=[num_filters])
    

    4. Detailed error message

    $ python finetune.py

    Traceback (most recent call last): File "finetune.py", line 91, in model = AlexNet(x, keep_prob, num_classes, train_layers) File "/home/mike/Documents/finetune_alexnet_with_tf/alexnet.py", line 56, in init self.create() File "/home/mike/Documents/finetune_alexnet_with_tf/alexnet.py", line 61, in create conv1 = conv(self.X, 11, 11, 96, 4, 4, padding='VALID', name='conv1') File "/home/mike/Documents/finetune_alexnet_with_tf/alexnet.py", line 147, in conv num_filters]) File "/home/mike/miniconda3/lib/python3.7/site-packages/tensorflow_core/python/ops/variables.py", line 260, in call return cls._variable_v2_call(*args, **kwargs) File "/home/mike/miniconda3/lib/python3.7/site-packages/tensorflow_core/python/ops/variables.py", line 254, in _variable_v2_call shape=shape) File "/home/mike/miniconda3/lib/python3.7/site-packages/tensorflow_core/python/ops/variables.py", line 235, in previous_getter = lambda **kws: default_variable_creator_v2(None, **kws) File "/home/mike/miniconda3/lib/python3.7/site-packages/tensorflow_core/python/ops/variable_scope.py", line 2645, in default_variable_creator_v2 shape=shape) File "/home/mike/miniconda3/lib/python3.7/site-packages/tensorflow_core/python/ops/variables.py", line 262, in call return super(VariableMetaclass, cls).call(*args, **kwargs) File "/home/mike/miniconda3/lib/python3.7/site-packages/tensorflow_core/python/ops/resource_variable_ops.py", line 1411, in init distribute_strategy=distribute_strategy) File "/home/mike/miniconda3/lib/python3.7/site-packages/tensorflow_core/python/ops/resource_variable_ops.py", line 1549, in _init_from_args (initial_value.shape, shape)) ValueError: The initial value's shape (()) is not compatible with the explicitly supplied shape argument ([11, 11, 3, 96]).

    opened by mikechen66 0
  • How to improve accuracy

    2019-05-28 10:45:45.833285 Validation Accuracy = 0.2188
    2019-05-28 10:45:45.833390 Saving checkpoint of model...
    2019-05-28 10:45:47.135384 Model checkpoint saved at checkpoints/model_epoch1.ckpt
    2019-05-28 10:45:47.464044 Start validation
    2019-05-28 10:45:50.295000 Validation Accuracy = 0.2188
    2019-05-28 10:45:50.295116 Saving checkpoint of model...
    2019-05-28 10:45:51.710815 Model checkpoint saved at checkpoints/model_epoch1.ckpt
    2019-05-28 10:45:52.035434 Start validation
    2019-05-28 10:45:54.884074 Validation Accuracy = 0.2188
    2019-05-28 10:45:54.884180 Saving checkpoint of model...
    2019-05-28 10:45:56.231970 Model checkpoint saved at checkpoints/model_epoch1.ckpt
    2019-05-28 10:45:56.560833 Start validation
    2019-05-28 10:45:59.407273 Validation Accuracy = 0.2188
    2019-05-28 10:45:59.407380 Saving checkpoint of model...
    2019-05-28 10:46:01.239952 Model checkpoint saved at checkpoints/model_epoch1.ckpt
    2019-05-28 10:46:01.566239 Start validation
    2019-05-28 10:46:04.391418 Validation Accuracy = 0.2188
    2019-05-28 10:46:04.391538 Saving checkpoint of model...
    2019-05-28 10:46:05.695854 Model checkpoint saved at checkpoints/model_epoch1.ckpt
    2019-05-28 10:46:06.340697 Start validation
    2019-05-28 10:46:09.281800 Validation Accuracy = 0.2188
    2019-05-28 10:46:09.281909 Saving checkpoint of model...
    2019-05-28 10:46:10.555316 Model checkpoint saved at checkpoints/model_epoch1.ckpt
    2019-05-28 10:46:10.904200 Start validation
    2019-05-28 10:46:13.926193 Validation Accuracy = 0.2188
    2019-05-28 10:46:13.926291 Saving checkpoint of model...

    As you can see, the accuracy is 0.2188 and does not change. What can I do to fix this?

    opened by Julius-ZCJ 1
  • data pre-processing before finetuning, divide by 255?

    Hello, I use Python 3.6 + TensorFlow 1.9.0 for a classification task by finetuning AlexNet. In kratzert's source code, the range of the data is [0, 255] and the mean [103, 116, 123] is subtracted from the image. However, in my case I get better accuracy when the input images are divided by 255 after subtracting the mean (distribution is [-0.5, 0.5]). If the input images are not divided by 255 (distribution is [-128, 128]), the results are bad and the summary operation more easily suffers from the Nan problem when the learning rate is higher than 0.001. The question is: should I scale the images?

    opened by qiyang77 2
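
A note on the duplicate-variable error from the first comment above: it typically appears when the notebook cell that builds the model is executed more than once in the same process, because each run calls tf.get_variable('weights', ...) inside the same default graph. A minimal workaround sketch, assuming the AlexNet class from alexnet.py and TF 1.x graph mode (this is not a change the repository itself makes), is to clear the graph before rebuilding:

    import tensorflow as tf
    from alexnet import AlexNet

    # Re-running a model-building cell reuses the same default graph, so the
    # variables from the previous run collide with the new ones. Resetting the
    # graph first avoids the "Variable conv1/weights already exists" error.
    tf.reset_default_graph()

    x = tf.placeholder(tf.float32, [None, 227, 227, 3])
    keep_prob = tf.placeholder(tf.float32)
    model = AlexNet(x, keep_prob, 1000, [])
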
Releases (v0.1.1)