Simplified interface for TensorFlow (mimicking Scikit Learn) for Deep Learning

Related tags

Deep Learning, skflow
Overview

SkFlow has been moved to TensorFlow.

SkFlow has been moved into the TensorFlow repository (http://github.com/tensorflow/tensorflow), specifically into the contrib folder. Development will continue there. Please submit any issues and pull requests to the TensorFlow repository instead.

This repository will be ramped down: after the next TensorFlow release, the code here will be wound down. Please see the TensorFlow repository for the most recent installation instructions.

Comments
  • How do I do multilabel image classification?

    Do I have to make changes in the multioutput file? Ideally, I want to train any model, like Inception, on my training data, which has multiple labels per example. How do I do that?
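
    For reference, the usual starting point for multilabel classification is to replace the softmax over classes with an independent sigmoid per label. A minimal sketch in plain TensorFlow (not skflow-specific; the tensors are toy stand-ins):

    import tensorflow as tf

    logits = tf.constant([[1.2, -0.3, 0.7]])   # model scores for 3 labels on one example
    targets = tf.constant([[1.0, 0.0, 1.0]])   # multi-hot target: several labels can be on
    # Sigmoid cross-entropy treats each label as an independent binary decision.
    loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=targets, logits=logits))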

    help wanted examples 
    opened by unography 21
  • Add early stopping and reporting based on validation data

    This PR allows a user to specify a validation dataset that is used for early stopping (and reporting). The PR was created to address issue 85.

    I made changes in 3 places.

    1. The trainer now takes a dictionary containing the validation data (in the same format as the output of the data feeder's get_dict_fn).
    2. The fit method now takes arguments for val_X and val_y. It converts these into the correct format for the trainer.
    3. The example file digits.py now uses early stopping, by supplying val_X and val_y.

    I can add early stopping to other examples if this approach looks good, though their behavior should not otherwise be affected by the current PR.
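
    A rough usage sketch of the new arguments (the estimator class and hyperparameters below are illustrative, not part of this PR):

    import skflow
    from sklearn import datasets

    digits = datasets.load_digits()
    n_val = len(digits.data) // 5                     # hold out 20% for validation
    X_train, y_train = digits.data[:-n_val], digits.target[:-n_val]
    X_val, y_val = digits.data[-n_val:], digits.target[-n_val:]

    classifier = skflow.TensorFlowDNNClassifier(hidden_units=[30, 20],
                                                n_classes=10, steps=500)
    # val_X / val_y are the new fit arguments used for early stopping and reporting.
    classifier.fit(X_train, y_train, val_X=X_val, val_y=y_val)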

    cla: yes 
    opened by dansbecker 14
  • Class weight support

    Hi,

    I am using skflow.ops.dnn to classify a two-class dataset (True and False). The percentage of True examples is very small, so I have an imbalanced dataset.

    It seems to me that one way to address this is to use class weights. However, looking at the implementation of skflow.ops.dnn, I do not see how I could apply class weights with the DNN.

    Is it possible to do that with skflow, or is there another technique for dealing with an imbalanced dataset in skflow?

    Thanks
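
    For reference, one way to express class weighting is to scale the per-example loss by a per-class factor. A minimal sketch in plain TensorFlow (this is not the skflow.ops.dnn API, and the numbers are made up):

    import tensorflow as tf

    logits = tf.constant([[2.0, 0.5], [0.1, 1.5]])   # toy DNN outputs for 2 examples
    labels = tf.constant([0, 1])                     # toy integer class ids
    class_weights = tf.constant([1.0, 10.0])         # up-weight the rare "True" class

    per_example_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels, logits=logits)
    weights = tf.gather(class_weights, labels)       # each example gets its class weight
    loss = tf.reduce_mean(per_example_loss * weights)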

    enhancement 
    opened by vinhqdang 13
  • Added verbose option

    I added an option to control the verbosity. For this, I added a "verbose" parameter to the __init__ method in __init__.py and to the train function in trainer.py. In addition, I passed this argument to the "self._trainer.train()" call in __init__.py and added a condition around the prints in trainer.py.

    cla: no 
    opened by ivallesp 12
  • Predict batch size default

    This changes the default batch size for prediction to be the same as for training, enabling efficient grid search. Previously GridSearchCV would try to make predictions in a single batch, which could take a lot of memory.

    This also adds a simple example of using skflow with GridSearchCV.
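
    A minimal sketch of the kind of grid search this enables (the parameter grid and import path are illustrative and depend on the scikit-learn version):

    import skflow
    from sklearn import datasets
    from sklearn.model_selection import GridSearchCV

    iris = datasets.load_iris()
    classifier = skflow.TensorFlowDNNClassifier(hidden_units=[10], n_classes=3,
                                                steps=200)
    # Predictions made inside the search now reuse the training batch size.
    grid = GridSearchCV(classifier, param_grid={'learning_rate': [0.01, 0.1]}, cv=3)
    grid.fit(iris.data, iris.target)
    print(grid.best_params_)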

    cla: no 
    opened by mheilman 11
  • Add example accessing of weights

    It wasn't clear how to access weights using the classifier.get_tensor_value('foo') syntax. This adds some examples for the CNN model. The names were figured out by logging the training run as if for TensorBoard and then running strings on the log file to look for the right namespace.

    Is there a better way to access these weights? Or to learn their names? The logging must walk through the graph and record these names. Maybe if there were a way to quickly list all the names, that'd be enough for advanced users to figure it out.
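
    For illustration, the access pattern looks roughly like this (the estimator and tensor name are guesses; the real names have to be discovered as described above):

    import skflow
    from sklearn import datasets

    iris = datasets.load_iris()
    classifier = skflow.TensorFlowLinearClassifier(n_classes=3, steps=200)
    classifier.fit(iris.data, iris.target)

    # Look a tensor up by its graph name; 'logistic_regression/weights:0' is a guess
    # at this model's weight variable and may not match your namespace.
    weights = classifier.get_tensor_value('logistic_regression/weights:0')
    print(weights.shape)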

    cla: yes 
    opened by dvbuntu 10
  • Plotting neural network built by skflow

    Hi,

    Sorry for asking so many questions.

    I think plotting is always a nice feature. Is it possible right now with skflow (or can we do that through TensorFlow directly)?
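
    One option through TensorFlow itself is TensorBoard's graph view. A minimal sketch, assuming fit accepts the logdir argument referenced elsewhere in this tracker:

    import skflow
    from sklearn import datasets

    iris = datasets.load_iris()
    classifier = skflow.TensorFlowDNNClassifier(hidden_units=[10, 20, 10],
                                                n_classes=3, steps=200)
    # Write the graph and training summaries to a log directory...
    classifier.fit(iris.data, iris.target, logdir='/tmp/skflow_logs')
    # ...then inspect the network structure with:
    #   tensorboard --logdir=/tmp/skflow_logs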

    opened by vinhqdang 10
  • move monitor and logdir arguments to init

    opened by mheilman 8
  • Exception when running language model example

    Hi,

    Thanks for making this tool. It will definitely make things easier for NN newcomers.

    I just tried running your language model example and got the following exception:

    Traceback (most recent call last):
      File "test.py", line 84, in <module>
        estimator.fit(X, y)
      File "/Users/aleksandar/tensorflow/lib/python3.5/site-packages/skflow/estimators/base.py", line 243, in fit
        feed_params_fn=self._data_feeder.get_feed_params)
      File "/Users/aleksandar/tensorflow/lib/python3.5/site-packages/skflow/trainer.py", line 114, in train
        feed_dict = feed_dict_fn()
      File "/Users/aleksandar/tensorflow/lib/python3.5/site-packages/skflow/io/data_feeder.py", line 307, in _feed_dict_fn
        inp[i, :] = six.next(self.X)
    StopIteration
    

    I made sure that my python distribution has the correct version of six. I tried running it both in a virtual environment and in a normal Python 3 distro. Any ideas what might be causing this?

    opened by savkov 7
  • another ValidationMonitor with validation(+early stopping) per epoch

    From what I understand, the existing ValidationMonitor performs validation every [print_steps] steps and checks the stop condition every [early_stopping_rounds] steps. I'd like to add another ValidationMonitor that performs validation and checks the stopping condition once per epoch. Is this the recommended practice in machine learning regarding validation and early stopping? I mean I'd like to add a fit process something like this:

    def fit(self, x_train, y_train, x_validate, y_validate):
        previous_validation_loss = float('inf')
        current_validation_loss = some_error(y_validate, estimator.predict(x_validate))
        while current_validation_loss < previous_validation_loss:
            estimator.train_one_more_epoch(x_train, y_train)
            previous_validation_loss = current_validation_loss
            current_validation_loss = some_error(y_validate, estimator.predict(x_validate))
    
    enhancement help wanted 
    opened by alanyuchenhou 7
  • Example of language model

    Add an example of a language model (RNN), for example a character-level model trained on a Shakespeare text (similar to https://github.com/sherjilozair/char-rnn-tensorflow).

    examples 
    opened by ilblackdragon 7
  • .travis.yml: The 'sudo' tag is now deprecated in Travis CI

    opened by cclauss 1
  • Why hasn't this repo been archived yet?

    New versions of TF have already been released since the last commit to this repo. As far as I understand from this project's README, you intended to close this repo. So why hasn't that been done yet?

    opened by nbro 0
Releases (v0.1)
  • v0.1 (Feb 14, 2016)
