The fastai book, published as Jupyter Notebooks

Overview

Available languages: English / Spanish / Korean / Chinese / Bengali / Indonesian

The fastai book

These notebooks cover an introduction to deep learning, fastai, and PyTorch. fastai is a layered API for deep learning; for more information, see the fastai paper. Everything in this repo is copyright Jeremy Howard and Sylvain Gugger, 2020 onwards.

These notebooks are used for a MOOC and form the basis of this book, which is currently available for purchase. The book does not have the same GPL restrictions that apply to this draft.

The code in the notebooks and Python .py files is covered by the GPL v3 license; see the LICENSE file for details.

The remainder (including all markdown cells in the notebooks and other prose) is not licensed for any redistribution or change of format or medium, other than making copies of the notebooks or forking this repo for your own private use. No commercial or broadcast use is allowed. We are making these materials freely available to help you learn deep learning, so please respect our copyright and these restrictions.

If you see someone hosting a copy of these materials somewhere else, please let them know that their actions are not allowed and may lead to legal action. Moreover, they would be hurting the community because we're not likely to release additional materials in this way if people ignore our copyright.

This is an early draft. If you get stuck running notebooks, please search the fastai-dev forum for answers, and ask for help there if needed. Please don't use GitHub issues for problems running the notebooks.

If you make any pull requests to this repo, then you are assigning copyright of that work to Jeremy Howard and Sylvain Gugger. (Additionally, if you are making small edits to spelling or text, please specify the name of the file and a very brief description of what you're fixing. It's difficult for reviewers to know which corrections have already been made. Thank you.)

Citations

If you wish to cite the book, you may use the following:

@book{howard2020deep,
title={Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD},
author={Howard, J. and Gugger, S.},
isbn={9781492045526},
url={https://books.google.no/books?id=xd6LxgEACAAJ},
year={2020},
publisher={O'Reilly Media, Incorporated}
}

Comments
  • RuntimeError: cuda runtime error (801) : operation not supported at C:\w\1\s\windows\pytorch\torch/csrc/generic/StorageSharing.cpp:245

    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    <ipython-input> in <module>
         10 learn = cnn_learner(dls, resnet34, metrics=error_rate)
    ---> 11 learn.fine_tune(1)

    d:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\fastai2\callback\schedule.py in fine_tune(self, epochs, base_lr, freeze_epochs, lr_mult, pct_start, div, **kwargs)
    --> 157     self.fit_one_cycle(freeze_epochs, slice(base_lr), pct_start=0.99, **kwargs)

    d:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\fastai2\callback\schedule.py in fit_one_cycle(self, n_epoch, lr_max, div, div_final, pct_start, wd, moms, cbs, reset_opt)
    --> 112     self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)

    d:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\fastai2\learner.py in fit(self, n_epoch, lr, wd, cbs, reset_opt)
    --> 192     self._do_epoch_train()

    d:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\fastai2\learner.py in _do_epoch_train(self)
    --> 165     self.all_batches()

    d:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\fastai2\learner.py in all_batches(self)
    --> 143     for o in enumerate(self.dl): self.one_batch(*o)

    d:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\fastai2\data\load.py in __iter__(self)
    ---> 97     for b in _loaders[self.fake_l.num_workers==0](self.fake_l):

    d:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py in __init__(self, loader)
    --> 719     w.start()

    d:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\process.py in start(self)
    --> 112     self._popen = self._Popen(self)

    d:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\context.py in _Popen(process_obj)
    --> 223     return _default_context.get_context().Process._Popen(process_obj)

    d:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\context.py in _Popen(process_obj)
    --> 322     return Popen(process_obj)

    d:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\popen_spawn_win32.py in __init__(self, process_obj)
    ---> 65     reduction.dump(process_obj, to_child)

    d:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\reduction.py in dump(obj, file, protocol)
    ---> 60     ForkingPickler(file, protocol).dump(obj)

    d:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\multiprocessing\reductions.py in reduce_tensor(tensor)
    --> 242     event_sync_required) = storage._share_cuda_()

    RuntimeError: cuda runtime error (801) : operation not supported at C:\w\1\s\windows\pytorch\torch/csrc/generic/StorageSharing.cpp:245
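
    The error appears to come from trying to share CUDA tensor storage with spawned DataLoader worker processes, which Windows does not support. A common workaround (not part of the original report, and assuming the standard intro-notebook DataLoaders shown further down this page) is to keep data loading in the main process:

    # Hedged workaround sketch for Windows: with num_workers=0 no CUDA storage
    # has to be pickled across to spawned worker processes.
    from fastai.vision.all import *

    path = untar_data(URLs.PETS)/'images'

    def is_cat(x): return x[0].isupper()
    dls = ImageDataLoaders.from_name_func(
        path, get_image_files(path), valid_pct=0.2, seed=42,
        label_func=is_cat, item_tfms=Resize(224), num_workers=0)

    learn = cnn_learner(dls, resnet34, metrics=error_rate)
    learn.fine_tune(1)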

    opened by ShowTimeJMJ 14
  • 06_multiclass accuracy_multi plot

    In cell 35, there is a plot

    [screenshot of the plot] Do I understand correctly that the x-axis shows activation values rather than probabilities, since we pass sigmoid=False? If so, why is the range limited to 0 and 1? Is that an unintentional error, and should the values span a wider range?
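
    For reference, a hedged sketch of the threshold sweep that typically produces that plot (dummy tensors stand in for the notebook's preds and targs from learn.get_preds(); get_preds normally applies the sigmoid already, which is presumably why the cell passes sigmoid=False and why the x-axis stays in the 0-1 probability range rather than over raw activations):

    import torch
    import matplotlib.pyplot as plt
    from fastai.metrics import accuracy_multi

    # Stand-ins for the notebook's (preds, targs); preds are treated as
    # already-sigmoided probabilities, hence sigmoid=False below.
    preds = torch.rand(64, 20)
    targs = torch.randint(0, 2, (64, 20)).bool()

    xs = torch.linspace(0.05, 0.95, 29)
    accs = [accuracy_multi(preds, targs, thresh=i.item(), sigmoid=False) for i in xs]
    plt.plot(xs, accs)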

    opened by grayskripko 9
  • IndexError: index 3 is out of bounds for dimension 0 with size 3

    Got this exception -

    IndexError: index 3 is out of bounds for dimension 0 with size 3

    on the following cell of the intro notebook:

    from fastai.vision.all import *
    path = untar_data(URLs.PETS)/'images'
    
    def is_cat(x): return x[0].isupper()
    dls = ImageDataLoaders.from_name_func(
        path, get_image_files(path), valid_pct=0.2, seed=42,
        label_func=is_cat, item_tfms=Resize(224))
    
    learn = cnn_learner(dls, resnet34, metrics=error_rate)
    learn.fine_tune(1)
    

    Full exception:

    ---------------------------------------------------------------------------
    IndexError                                Traceback (most recent call last)
    <command-8165228> in <module>
         11 
         12 learn = cnn_learner(dls, resnet34, metrics=error_rate)
    ---> 13 learn.fine_tune(1)
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastcore/utils.py in _f(*args, **kwargs)
        471         init_args.update(log)
        472         setattr(inst, 'init_args', init_args)
    --> 473         return inst if to_return else f(*args, **kwargs)
        474     return _f
        475 
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/callback/schedule.py in fine_tune(self, epochs, base_lr, freeze_epochs, lr_mult, pct_start, div, **kwargs)
        159     "Fine tune with `freeze` for `freeze_epochs` then with `unfreeze` from `epochs` using discriminative LR"
        160     self.freeze()
    --> 161     self.fit_one_cycle(freeze_epochs, slice(base_lr), pct_start=0.99, **kwargs)
        162     base_lr /= 2
        163     self.unfreeze()
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastcore/utils.py in _f(*args, **kwargs)
        471         init_args.update(log)
        472         setattr(inst, 'init_args', init_args)
    --> 473         return inst if to_return else f(*args, **kwargs)
        474     return _f
        475 
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/callback/schedule.py in fit_one_cycle(self, n_epoch, lr_max, div, div_final, pct_start, wd, moms, cbs, reset_opt)
        111     scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
        112               'mom': combined_cos(pct_start, *(self.moms if moms is None else moms))}
    --> 113     self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
        114 
        115 # Cell
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastcore/utils.py in _f(*args, **kwargs)
        471         init_args.update(log)
        472         setattr(inst, 'init_args', init_args)
    --> 473         return inst if to_return else f(*args, **kwargs)
        474     return _f
        475 
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in fit(self, n_epoch, lr, wd, cbs, reset_opt)
        205             self.opt.set_hypers(lr=self.lr if lr is None else lr)
        206             self.n_epoch,self.loss = n_epoch,tensor(0.)
    --> 207             self._with_events(self._do_fit, 'fit', CancelFitException, self._end_cleanup)
        208 
        209     def _end_cleanup(self): self.dl,self.xb,self.yb,self.pred,self.loss = None,(None,),(None,),None,None
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
        153 
        154     def _with_events(self, f, event_type, ex, final=noop):
    --> 155         try:       self(f'before_{event_type}')       ;f()
        156         except ex: self(f'after_cancel_{event_type}')
        157         finally:   self(f'after_{event_type}')        ;final()
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in _do_fit(self)
        195         for epoch in range(self.n_epoch):
        196             self.epoch=epoch
    --> 197             self._with_events(self._do_epoch, 'epoch', CancelEpochException)
        198 
        199     @log_args(but='cbs')
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
        153 
        154     def _with_events(self, f, event_type, ex, final=noop):
    --> 155         try:       self(f'before_{event_type}')       ;f()
        156         except ex: self(f'after_cancel_{event_type}')
        157         finally:   self(f'after_{event_type}')        ;final()
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in _do_epoch(self)
        189 
        190     def _do_epoch(self):
    --> 191         self._do_epoch_train()
        192         self._do_epoch_validate()
        193 
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in _do_epoch_train(self)
        181     def _do_epoch_train(self):
        182         self.dl = self.dls.train
    --> 183         self._with_events(self.all_batches, 'train', CancelTrainException)
        184 
        185     def _do_epoch_validate(self, ds_idx=1, dl=None):
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
        153 
        154     def _with_events(self, f, event_type, ex, final=noop):
    --> 155         try:       self(f'before_{event_type}')       ;f()
        156         except ex: self(f'after_cancel_{event_type}')
        157         finally:   self(f'after_{event_type}')        ;final()
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in all_batches(self)
        159     def all_batches(self):
        160         self.n_iter = len(self.dl)
    --> 161         for o in enumerate(self.dl): self.one_batch(*o)
        162 
        163     def _do_one_batch(self):
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in one_batch(self, i, b)
        177         self.iter = i
        178         self._split(b)
    --> 179         self._with_events(self._do_one_batch, 'batch', CancelBatchException)
        180 
        181     def _do_epoch_train(self):
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
        153 
        154     def _with_events(self, f, event_type, ex, final=noop):
    --> 155         try:       self(f'before_{event_type}')       ;f()
        156         except ex: self(f'after_cancel_{event_type}')
        157         finally:   self(f'after_{event_type}')        ;final()
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in __call__(self, event_name)
        131     def ordered_cbs(self, event): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, event)]
        132 
    --> 133     def __call__(self, event_name): L(event_name).map(self._call_one)
        134 
        135     def _call_one(self, event_name):
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastcore/foundation.py in map(self, f, *args, **kwargs)
        394              else f.format if isinstance(f,str)
        395              else f.__getitem__)
    --> 396         return self._new(map(g, self))
        397 
        398     def filter(self, f, negate=False, **kwargs):
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastcore/foundation.py in _new(self, items, *args, **kwargs)
        340     @property
        341     def _xtra(self): return None
    --> 342     def _new(self, items, *args, **kwargs): return type(self)(items, *args, use_list=None, **kwargs)
        343     def __getitem__(self, idx): return self._get(idx) if is_indexer(idx) else L(self._get(idx), use_list=None)
        344     def copy(self): return self._new(self.items.copy())
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastcore/foundation.py in __call__(cls, x, *args, **kwargs)
         49             return x
         50 
    ---> 51         res = super().__call__(*((x,) + args), **kwargs)
         52         res._newchk = 0
         53         return res
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastcore/foundation.py in __init__(self, items, use_list, match, *rest)
        331         if items is None: items = []
        332         if (use_list is not None) or not _is_array(items):
    --> 333             items = list(items) if use_list else _listify(items)
        334         if match is not None:
        335             if is_coll(match): match = len(match)
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastcore/foundation.py in _listify(o)
        244     if isinstance(o, list): return o
        245     if isinstance(o, str) or _is_array(o): return [o]
    --> 246     if is_iter(o): return list(o)
        247     return [o]
        248 
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastcore/foundation.py in __call__(self, *args, **kwargs)
        307             if isinstance(v,_Arg): kwargs[k] = args.pop(v.i)
        308         fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:]
    --> 309         return self.fn(*fargs, **kwargs)
        310 
        311 # Cell
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in _call_one(self, event_name)
        135     def _call_one(self, event_name):
        136         assert hasattr(event, event_name), event_name
    --> 137         [cb(event_name) for cb in sort_by_run(self.cbs)]
        138 
        139     def _bn_bias_state(self, with_bias): return norm_bias_params(self.model, with_bias).map(self.opt.state)
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in <listcomp>(.0)
        135     def _call_one(self, event_name):
        136         assert hasattr(event, event_name), event_name
    --> 137         [cb(event_name) for cb in sort_by_run(self.cbs)]
        138 
        139     def _bn_bias_state(self, with_bias): return norm_bias_params(self.model, with_bias).map(self.opt.state)
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/callback/core.py in __call__(self, event_name)
         42                (self.run_valid and not getattr(self, 'training', False)))
         43         res = None
    ---> 44         if self.run and _run: res = getattr(self, event_name, noop)()
         45         if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit
         46         return res
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/callback/schedule.py in before_batch(self)
         84     def __init__(self, scheds): self.scheds = scheds
         85     def before_fit(self): self.hps = {p:[] for p in self.scheds.keys()}
    ---> 86     def before_batch(self): self._update_val(self.pct_train)
         87 
         88     def _update_val(self, pct):
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/callback/schedule.py in _update_val(self, pct)
         87 
         88     def _update_val(self, pct):
    ---> 89         for n,f in self.scheds.items(): self.opt.set_hyper(n, f(pct))
         90 
         91     def after_batch(self):
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/callback/schedule.py in _inner(pos)
         67         if pos == 1.: return scheds[-1](1.)
         68         idx = (pos >= pcts).nonzero().max()
    ---> 69         actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
         70         return scheds[idx](actual_pos.item())
         71     return _inner
    
    IndexError: index 3 is out of bounds for dimension 0 with size 3
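
    One way to read the last frame (an inference from the code shown above, not a confirmed diagnosis): only pos == 1. is guarded against, so a training position that drifts slightly past 1.0 selects the last breakpoint and then indexes one past the end of pcts, which with pct_start=0.99 has exactly three entries. A minimal illustrative sketch in plain PyTorch, not the fastai code itself:

    import torch

    # Three schedule breakpoints, as combined_cos builds for pct_start=0.99
    pcts = torch.tensor([0.0, 0.99, 1.0])
    pos = 1.0000001                        # pct_train just past 1.0
    idx = (pos >= pcts).nonzero().max()    # -> tensor(2), the last breakpoint
    try:
        actual_pos = (pos - pcts[idx]) / (pcts[idx + 1] - pcts[idx])
    except IndexError as e:
        print(e)  # index 3 is out of bounds for dimension 0 with size 3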
    
    bug 
    opened by Tagar 8
  • cosmetic fixes 01

    I'll do this one and check it in before happily looking at more.

    It wasn't self-evident how readers would already be in a Jupyter notebook running the import and GPU cells; is the material for choosing a cloud GPU server and getting to the notebook provided to them already, elsewhere?

    I haven't edited some candidates for this, but can I check whether assisting second-language readers is a significant goal? If so, we could edit out contractions, e.g. "we'll" becomes "we will". I found contractions ridiculously hard when I learnt my other languages. Other guides that help: treat really long sentences as a red flag, and put active sentences with the important things first. Sorry, I know you and Sylvain have written lots and know all this, but I'm picturing an overseas high-school student, not even a US-based grad student from overseas.

    opened by bradsaracik 8
  • search_images_ddg always fails with TimeoutError

    I've tried to use search_images_ddg from Paperspace and from my local environment. The result is always the same: TimeoutError. I've asked the question on the forum, and it seems I am not the only one with this problem: https://forums.fast.ai/t/live-coding-2/96690/55. It looks like it broke recently.
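
    As an interim workaround (an assumption on my part rather than something from the report, and it relies on the third-party duckduckgo_search package, whose API has changed between releases), the site can be queried directly instead of through fastbook's helper:

    # Hedged sketch: fetch image URLs from DuckDuckGo without search_images_ddg.
    from duckduckgo_search import DDGS

    with DDGS() as ddgs:
        results = list(ddgs.images('grizzly bear', max_results=100))
    urls = [r['image'] for r in results]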

    opened by RomanVolkov 6
  • Wrong syntax in search_images_bing

    Hello, I am having trouble with lesson 04. In the first input cell, when I try to run the imports:

    #hide
    !pip install -Uqq fastbook
    import fastbook
    fastbook.setup_book()
    

    I am having the following error:

    File "/usr/local/lib/python3.6/dist-packages/fastbook/__init__.py", line 45
        def search_images_bing(key,earch_images_bing(key, term, min_sz=128, max_images=150):
    SyntaxError: invalid syntax
    

    Could it be a typo in the latest release?
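
    For what it's worth, line 45 looks like a corrupted paste of the function definition; reconstructing it from the fragments, the intended line was presumably:

    # Presumed intent of the garbled def in fastbook/__init__.py line 45
    def search_images_bing(key, term, min_sz=128, max_images=150):
        ...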

    opened by Rob192 5
  • Azure api change - image search Endpoint change in Azure - 'PermissionDenied' Error Lesson 2

    Hi,

    I can't complete lesson 2. I have an API key for Bing Image Search v7, but it seems this resource changed at the end of October 2020 and the change isn't yet incorporated into the code (Azure update). Azure does say it will support the older Cognitive Services API for the next 3 years, but unfortunately that appears to apply only to pre-existing users, as I cannot find a way to subscribe as a new Azure user.

    [screenshot]

    Fundamentally from a code perspective, it appears that the endpoint changed from that used in the code (fastbook/utils.py):

    def search_images_bing(key, term, min_sz=128):
        client = api('https://api.cognitive.microsoft.com', auth(key))
        return L(client.images.search(query=term, count=150, min_height=min_sz, min_width=min_sz).value)
    

    When I try to run this in the notebook I get a 'PermissionDenied' error, which I think is due to this change in APIs from Azure. If so, my theory is that the code will need to be refactored from the older https://api.cognitive.microsoft.com endpoint to the new one, https://api.bing.microsoft.com/.. I did attempt a naïve endpoint replacement (wishful thinking, I know!), but that unfortunately didn't work.
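
    A minimal sketch of what a call against the newer endpoint might look like (untested here; the path, header, and parameter names follow the public Bing Image Search v7 documentation rather than fastbook's eventual fix):

    import requests
    from fastcore.foundation import L

    def search_images_bing_v7(key, term, min_sz=128, max_images=150):
        # Hypothetical replacement that talks to api.bing.microsoft.com directly
        resp = requests.get(
            'https://api.bing.microsoft.com/v7.0/images/search',
            headers={'Ocp-Apim-Subscription-Key': key},
            params={'q': term, 'count': max_images,
                    'minHeight': min_sz, 'minWidth': min_sz})
        resp.raise_for_status()
        return L(resp.json()['value'])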

    I've tried Colab and Paperspace, attempted a local install (and hit other issues there), and tried a million different ways to get a Bing Search API key via Cognitive Services, but it does appear the guidance included on the forum here (referenced on the main fast.ai course page here) is no longer applicable (or I've done something silly, hopefully!).

    I'm unsure how to continue with this lesson without this resource - is it safe enough to move on to the next one? Or perhaps I've done something very incorrect, in which case feedback would be appreciated.

    Thanks for any advice/guidance/fixes! Rebecca

    opened by bkaCodes 5
  • 02_production: Updates needed for new Bing Image Search API

    Hi FastAI team,

    There have been some recent updates to the Bing Image Search API, and 02_production.ipynb no longer works out of the box as a result. There has been some discussion about this on the forums over the past few months:

    https://forums.fast.ai/t/02-production-permissiondenied-error/65823/19

    I don't see any pull requests created yet, so I went ahead and proposed the following changes:

    1. Update search_images_bing() function in utils.py
    2. Update the call to download_images in 02_production.ipynb
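
    A hedged sketch of what change (2) amounts to in the notebook cell (dest and key are the names already used in 02_production.ipynb; with the raw Bing v7 JSON, image URLs live under the 'contentUrl' key rather than the old SDK's content_url attribute):

    from fastbook import *

    # Hypothetical updated cell: results now holds plain dicts, so pull the
    # URLs out by key before handing them to download_images.
    results = search_images_bing(key, 'grizzly bear')
    urls = [r['contentUrl'] for r in results]
    download_images(dest, urls=urls)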
    opened by joshkraft 5
  • 09_tabular.ipynb: NameError: name 'file_extract' is not defined

    When running notebook 9, there's no file_extract function.

    NameError                                 Traceback (most recent call last)
    /tmp/ipykernel_77010/3628020118.py in <module>
          2     path.mkdir(parents=true)
          3     api.competition_download_cli('bluebook-for-bulldozers', path=path)
    ----> 4     file_extract(path/'bluebook-for-bulldozers.zip')
          5 
          6 path.ls(file_type='text')
    
    NameError: name 'file_extract' is not defined
    

    You can fix it manually:

    cd ~/.fastai/archive/bluebook
    unzip bluebook-for-bulldozers.zip
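
    Alternatively, a standard-library fix from inside the notebook (assuming the same path variable as in the failing cell):

    import zipfile

    # Extract the Kaggle archive in place of the missing file_extract helper
    with zipfile.ZipFile(path/'bluebook-for-bulldozers.zip') as zf:
        zf.extractall(path)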
    
    opened by erg 4
  • 04_mnist_basics 'An End-to-End SGD Example/Step 6' - subplots do not converge when GPU enabled

    If I load 04_mnist_basics.ipynb into Google Colab, the sub-plots for 'An End-to-End SGD Example/Step 6' look like this in the saved outputs:

    [screenshot]

    If I run up to and including the same cell with the GPU disabled:

    [screenshot]

    If I enable the GPU, reset, and run from the beginning:

    [screenshot]
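
    One thing worth ruling out (a suggestion, not a diagnosis): the toy SGD example starts from random parameters, and the CPU and GPU random streams differ, so pinning the seed before the parameter init at least makes each configuration repeatable:

    import torch

    # Hedged sketch: seed the RNG before the notebook's parameter initialisation;
    # reruns on the same device become reproducible, though CPU and GPU streams
    # may still legitimately differ from each other.
    torch.manual_seed(42)
    params = torch.randn(3).requires_grad_()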

    opened by pjgoodall 4
  • 09_tabular.ipynb: crashes when creating a `TabularPandas` using `Normalize`

    Hi! I'm using a fresh checkout of 09_tabular.ipynb, clearing the notebook and running from the start. When I get to the following cell in the neural network section, it crashes:

    procs_nn = [Categorify, FillMissing, Normalize]
    to_nn = TabularPandas(df_nn_final, procs_nn, cat_nn, cont_nn,
                          splits=splits, y_names=dep_var)
    

    The error message (long, sorry) is as follows:

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-105-9827c0e691d0> in <module>
          1 procs_nn = [Categorify, FillMissing, Normalize]
    ----> 2 to_nn = TabularPandas(df_nn_final, procs_nn, cat_nn, cont_nn,
          3                       splits=splits, y_names=dep_var)
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastai/tabular/core.py in __init__(self, df, procs, cat_names, cont_names, y_names, y_block, splits, do_setup, device, inplace, reduce_memory)
        164         self.cat_names,self.cont_names,self.procs = L(cat_names),L(cont_names),Pipeline(procs)
        165         self.split = len(df) if splits is None else len(splits[0])
    --> 166         if do_setup: self.setup()
        167 
        168     def new(self, df):
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastai/tabular/core.py in setup(self)
        175     def decode_row(self, row): return self.new(pd.DataFrame(row).T).decode().items.iloc[0]
        176     def show(self, max_n=10, **kwargs): display_df(self.new(self.all_cols[:max_n]).decode().items)
    --> 177     def setup(self): self.procs.setup(self)
        178     def process(self): self.procs(self)
        179     def loc(self): return self.items.loc
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastcore/transform.py in setup(self, items, train_setup)
        190         tfms = self.fs[:]
        191         self.fs.clear()
    --> 192         for t in tfms: self.add(t,items, train_setup)
        193 
        194     def add(self,t, items=None, train_setup=False):
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastcore/transform.py in add(self, t, items, train_setup)
        193 
        194     def add(self,t, items=None, train_setup=False):
    --> 195         t.setup(items, train_setup)
        196         self.fs.append(t)
        197 
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastcore/transform.py in setup(self, items, train_setup)
         77     def setup(self, items=None, train_setup=False):
         78         train_setup = train_setup if self.train_setup is None else self.train_setup
    ---> 79         return self.setups(getattr(items, 'train', items) if train_setup else items)
         80 
         81     def _call(self, fn, x, split_idx=None, **kwargs):
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastcore/dispatch.py in __call__(self, *args, **kwargs)
        116         elif self.inst is not None: f = MethodType(f, self.inst)
        117         elif self.owner is not None: f = MethodType(f, self.owner)
    --> 118         return f(*args, **kwargs)
        119 
        120     def __get__(self, inst, owner):
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastai/tabular/core.py in setups(self, to)
        271     store_attr(but='to', means=dict(getattr(to, 'train', to).conts.mean()),
        272                stds=dict(getattr(to, 'train', to).conts.std(ddof=0)+1e-7))
    --> 273     return self(to)
        274 
        275 @Normalize
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastcore/transform.py in __call__(self, x, **kwargs)
         71     @property
         72     def name(self): return getattr(self, '_name', _get_name(self))
    ---> 73     def __call__(self, x, **kwargs): return self._call('encodes', x, **kwargs)
         74     def decode  (self, x, **kwargs): return self._call('decodes', x, **kwargs)
         75     def __repr__(self): return f'{self.name}:\nencodes: {self.encodes}decodes: {self.decodes}'
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastcore/transform.py in _call(self, fn, x, split_idx, **kwargs)
         81     def _call(self, fn, x, split_idx=None, **kwargs):
         82         if split_idx!=self.split_idx and self.split_idx is not None: return x
    ---> 83         return self._do_call(getattr(self, fn), x, **kwargs)
         84 
         85     def _do_call(self, f, x, **kwargs):
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastcore/transform.py in _do_call(self, f, x, **kwargs)
         87             if f is None: return x
         88             ret = f.returns(x) if hasattr(f,'returns') else None
    ---> 89             return retain_type(f(x, **kwargs), x, ret)
         90         res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)
         91         return retain_type(res, x)
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastcore/dispatch.py in __call__(self, *args, **kwargs)
        116         elif self.inst is not None: f = MethodType(f, self.inst)
        117         elif self.owner is not None: f = MethodType(f, self.owner)
    --> 118         return f(*args, **kwargs)
        119 
        120     def __get__(self, inst, owner):
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastai/tabular/core.py in encodes(self, to)
        275 @Normalize
        276 def encodes(self, to:Tabular):
    --> 277     to.conts = (to.conts-self.means) / self.stds
        278     return to
        279 
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/pandas/core/ops/__init__.py in f(self, other, axis, level, fill_value)
        649         # TODO: why are we passing flex=True instead of flex=not special?
        650         #  15 tests fail if we pass flex=not special instead
    --> 651         self, other = _align_method_FRAME(self, other, axis, flex=True, level=level)
        652 
        653         if isinstance(other, ABCDataFrame):
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/pandas/core/ops/__init__.py in _align_method_FRAME(left, right, axis, flex, level)
        501     elif is_list_like(right) and not isinstance(right, (ABCSeries, ABCDataFrame)):
        502         # GH17901
    --> 503         right = to_series(right)
        504 
        505     if flex is not None and isinstance(right, ABCDataFrame):
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/pandas/core/ops/__init__.py in to_series(right)
        463         else:
        464             if len(left.columns) != len(right):
    --> 465                 raise ValueError(
        466                     msg.format(req_len=len(left.columns), given_len=len(right))
        467                 )
    
    ValueError: Unable to coerce to Series, length must be 1: given 0
    

    I don't know where this is coming from (despite the long backtrace).
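
    One way to read the backtrace (an assumption, not a confirmed diagnosis): pandas' message "length must be 1: given 0" says to.conts has one column while the means Normalize computed are empty, which is what happens when the lone continuous column has a non-numeric dtype and gets silently dropped by DataFrame.mean(). A quick check in the notebook:

    # Hedged sanity check on the continuous columns before building TabularPandas
    print(cont_nn)
    print(df_nn_final[cont_nn].dtypes)   # expect a numeric dtype here
    print(df_nn_final[cont_nn].mean())   # an empty Series reproduces the failure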

    opened by juliangilbey 4
  • bugfix to 15_arch_details.ipynb create_body call

    I ran this notebook in Google Colab and got an error when passing simply the model name as the input to the create_body function. Since children() can only be called on an instantiated model, not on the function form, I added the parentheses in the notebook, so encoder = create_body(resnet34, cut=-2) became encoder = create_body(resnet34(), cut=-2), and the notebook runs fine. Except it doesn't train up to 94% accuracy.
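
    The accuracy gap is plausibly explained by the weights rather than by the API change: create_body(resnet34, cut=-2) in older fastai instantiated the architecture with pretrained ImageNet weights, whereas resnet34() builds a randomly initialised model. A hedged sketch of keeping the pretrained weights with the newer call form (the exact torchvision argument depends on the installed version):

    from fastai.vision.all import create_body, resnet34

    # Pass an instantiated model, but keep the pretrained ImageNet weights;
    # on torchvision >= 0.13 use weights='IMAGENET1K_V1' instead of pretrained=True.
    encoder = create_body(resnet34(pretrained=True), cut=-2)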

    opened by KikiCS 1
  • Chapter 2 has missing reference, and instead has a number of <> placeholders

    It seems like 02_production.ipynb is supposed to have something instead of the <> placeholders. Maybe a reference to chapter one or the first lesson should go here?

    The six lines of code we saw in <> are just one small part of the process of using deep learning in practice. In this chapter, we're going to use a computer vision example to look at the end-to-end process of creating a deep learning application. More specifically, we're going to build a bear classifier! In the process, we'll discuss the capabilities and constraints of deep learning, explore how to create datasets, look at possible gotchas when using deep learning in practice, and more. Many of the key points will apply equally well to other deep learning problems, such as those in <>. If you work through a problem similar in key respects to our example problems, we expect you to get excellent results with little code, quickly.

    and this one as well:

    There are many domains in which deep learning has not been used to analyze images yet, but those where it has been tried have nearly universally shown that computers can recognize what items are in an image at least as well as people can—even specially trained people, such as radiologists. This is known as object recognition. Deep learning is also good at recognizing where objects in an image are, and can highlight their locations and name each found object. This is known as object detection (there is also a variant of this that we saw in <>, where every pixel

    opened by delbarital 1
  • Fixed name of paper author and link to paper

    Fixed author name (it's Ron Kohavi) and associated the link with the paper, since the link goes to the paper, not the dataset.

    In the diff, I'm not sure if there's a way to avoid the language_info part.

    opened by cgoldammer 1