Dataset Viewer

The viewer reports 16 columns with the following types and value ranges:

- arxiv_id: float64, 1.5k to 2.51k (arXiv identifiers stored as numbers)
- title: string, 9 to 178 characters, nullable
- authors: string, 2 to 22.8k characters
- categories: string, 4 to 146 characters
- summary: string, 103 to 1.92k characters, nullable
- published: date string, 2015-02-06 10:44:00 to 2025-07-10 17:59:58, nullable
- comments: string, 2 to 417 characters, nullable
- journal_ref: string, 321 distinct values
- doi: string, 398 distinct values
- ss_title: string, 8 to 159 characters, nullable
- ss_authors: string, 11 to 8.38k characters, nullable
- ss_year: float64, 2.02k to 2.03k (i.e., years roughly 2015 to 2025), nullable
- ss_venue: string, 281 distinct values
- ss_citationCount: float64, 0 to 134k, nullable
- ss_referenceCount: float64, 0 to 429, nullable
- ss_fieldsOfStudy: string, 47 distinct values

Each record below lists these fields in this order, separated by | characters; null marks missing values.
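The flattened records below are easier to explore programmatically than to read. A minimal sketch using the Hugging Face `datasets` library, assuming (hypothetically) that this data is published on the Hub as `example-user/arxiv-ss-papers`; substitute the actual repository id or a local path:

```python
# Minimal loading sketch. NOTE: "example-user/arxiv-ss-papers" is a
# hypothetical repository id, not the real location of this dataset.
from datasets import load_dataset

ds = load_dataset("example-user/arxiv-ss-papers", split="train")

# Keep only records that have Semantic Scholar citation counts.
cited = ds.filter(lambda row: row["ss_citationCount"] is not None)

row = cited[0]
print(row["arxiv_id"], row["title"])
print("citations:", row["ss_citationCount"], "venue:", row["ss_venue"])
```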
1502.01852
|
Delving Deep into Rectifiers: Surpassing Human-Level Performance on
ImageNet Classification
|
['Kaiming He', 'Xiangyu Zhang', 'Shaoqing Ren', 'Jian Sun']
|
['cs.CV', 'cs.AI', 'cs.LG']
|
Rectified activation units (rectifiers) are essential for state-of-the-art
neural networks. In this work, we study rectifier neural networks for image
classification from two aspects. First, we propose a Parametric Rectified
Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU
improves model fitting with nearly zero extra computational cost and little
overfitting risk. Second, we derive a robust initialization method that
particularly considers the rectifier nonlinearities. This method enables us to
train extremely deep rectified models directly from scratch and to investigate
deeper or wider network architectures. Based on our PReLU networks
(PReLU-nets), we achieve 4.94% top-5 test error on the ImageNet 2012
classification dataset. This is a 26% relative improvement over the ILSVRC 2014
winner (GoogLeNet, 6.66%). To our knowledge, our result is the first to surpass
human-level performance (5.1%, Russakovsky et al.) on this visual recognition
challenge.
|
2015-02-06T10:44:00Z
| null | null | null | null | null | null | null | null | null | null |
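The abstract above hinges on two concrete ideas: a rectifier with a learned negative slope, and an initialization whose variance accounts for that rectifier. A minimal NumPy sketch of both, written from the abstract's description rather than the authors' code:

```python
import numpy as np

def prelu(x, a):
    """PReLU: identity for x > 0, learned slope a for x <= 0 (a = 0 recovers ReLU)."""
    return np.where(x > 0, x, a * x)

def he_init(fan_in, fan_out, rng):
    """He/MSRA initialization: zero-mean Gaussian with std sqrt(2 / fan_in),
    the variance derived for layers followed by rectifier nonlinearities."""
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

rng = np.random.default_rng(0)
x = np.array([-2.0, -0.5, 0.0, 1.5])
print(prelu(x, a=0.25))              # [-0.5   -0.125  0.     1.5  ]
print(he_init(256, 128, rng).std())  # close to sqrt(2/256) ~ 0.088
```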
1502.03044
|
Show, Attend and Tell: Neural Image Caption Generation with Visual
Attention
|
['Kelvin Xu', 'Jimmy Ba', 'Ryan Kiros', 'Kyunghyun Cho', 'Aaron Courville', 'Ruslan Salakhutdinov', 'Richard Zemel', 'Yoshua Bengio']
|
['cs.LG', 'cs.CV']
|
Inspired by recent work in machine translation and object detection, we
introduce an attention based model that automatically learns to describe the
content of images. We describe how we can train this model in a deterministic
manner using standard backpropagation techniques and stochastically by
maximizing a variational lower bound. We also show through visualization how
the model is able to automatically learn to fix its gaze on salient objects
while generating the corresponding words in the output sequence. We validate
the use of attention with state-of-the-art performance on three benchmark
datasets: Flickr8k, Flickr30k and MS COCO.
|
2015-02-10T19:18:29Z
| null | null | null | null | null | null | null | null | null | null |
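The attention step the abstract describes reduces, at each output word, to a softmax over image regions followed by a weighted sum of their features. A toy NumPy sketch of that step, with the scoring network omitted:

```python
import numpy as np

def soft_attention(features, scores):
    """features: (L, D) region features; scores: (L,) unnormalized relevance.
    Returns weights alpha and the context vector z = sum_i alpha_i * a_i."""
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    z = alpha @ features
    return alpha, z

rng = np.random.default_rng(0)
a = rng.normal(size=(14 * 14, 512))   # 196 regions with 512-d features
e = rng.normal(size=(14 * 14,))       # scores from an (omitted) attention MLP
alpha, z = soft_attention(a, e)
print(alpha.sum(), z.shape)           # ~1.0, (512,)
```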
1502.05698
|
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
|
['Jason Weston', 'Antoine Bordes', 'Sumit Chopra', 'Alexander M. Rush', 'Bart van Merriënboer', 'Armand Joulin', 'Tomas Mikolov']
|
['cs.AI', 'cs.CL', 'stat.ML']
|
One long-term goal of machine learning research is to produce methods that
are applicable to reasoning and natural language, in particular building an
intelligent dialogue agent. To measure progress towards that goal, we argue for
the usefulness of a set of proxy tasks that evaluate reading comprehension via
question answering. Our tasks measure understanding in several ways: whether a
system is able to answer questions via chaining facts, simple induction,
deduction and many more. The tasks are designed to be prerequisites for any
system that aims to be capable of conversing with a human. We believe many
existing learning systems can currently not solve them, and hence our aim is to
classify these tasks into skill sets, so that researchers can identify (and
then rectify) the failings of their systems. We also extend and improve the
recently introduced Memory Networks model, and show it is able to solve some,
but not all, of the tasks.
|
2015-02-19T20:46:10Z
| null | null | null | null | null | null | null | null | null | null |
1503.02531
|
Distilling the Knowledge in a Neural Network
|
['Geoffrey Hinton', 'Oriol Vinyals', 'Jeff Dean']
|
['stat.ML', 'cs.LG', 'cs.NE']
|
A very simple way to improve the performance of almost any machine learning
algorithm is to train many different models on the same data and then to
average their predictions. Unfortunately, making predictions using a whole
ensemble of models is cumbersome and may be too computationally expensive to
allow deployment to a large number of users, especially if the individual
models are large neural nets. Caruana and his collaborators have shown that it
is possible to compress the knowledge in an ensemble into a single model which
is much easier to deploy and we develop this approach further using a different
compression technique. We achieve some surprising results on MNIST and we show
that we can significantly improve the acoustic model of a heavily used
commercial system by distilling the knowledge in an ensemble of models into a
single model. We also introduce a new type of ensemble composed of one or more
full models and many specialist models which learn to distinguish fine-grained
classes that the full models confuse. Unlike a mixture of experts, these
specialist models can be trained rapidly and in parallel.
|
2015-03-09T15:44:49Z
|
NIPS 2014 Deep Learning Workshop
| null | null |
Distilling the Knowledge in a Neural Network
|
['Geoffrey E. Hinton', 'O. Vinyals', 'J. Dean']
| 2015
|
arXiv.org
| 19,824
| 9
|
['Mathematics', 'Computer Science']
|
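Distillation, as summarized above, trains a small model against the teacher's class probabilities softened by a temperature. A minimal NumPy sketch of the softened targets and the matching loss; the temperature and logits here are illustrative:

```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between temperature-softened teacher and student
    distributions; the T**2 factor compensates for gradients scaling as 1/T^2."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -(T ** 2) * np.sum(p_teacher * np.log(p_student + 1e-12))

teacher = np.array([8.0, 2.0, 0.5])   # confident teacher logits
student = np.array([2.0, 1.0, 0.5])   # student logits early in training
print(distillation_loss(student, teacher))
```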
1503.03832
|
FaceNet: A Unified Embedding for Face Recognition and Clustering
|
['Florian Schroff', 'Dmitry Kalenichenko', 'James Philbin']
|
['cs.CV']
|
Despite significant recent advances in the field of face recognition,
implementing face verification and recognition efficiently at scale presents
serious challenges to current approaches. In this paper we present a system,
called FaceNet, that directly learns a mapping from face images to a compact
Euclidean space where distances directly correspond to a measure of face
similarity. Once this space has been produced, tasks such as face recognition,
verification and clustering can be easily implemented using standard techniques
with FaceNet embeddings as feature vectors.
Our method uses a deep convolutional network trained to directly optimize the
embedding itself, rather than an intermediate bottleneck layer as in previous
deep learning approaches. To train, we use triplets of roughly aligned matching
/ non-matching face patches generated using a novel online triplet mining
method. The benefit of our approach is much greater representational
efficiency: we achieve state-of-the-art face recognition performance using only
128-bytes per face.
On the widely used Labeled Faces in the Wild (LFW) dataset, our system
achieves a new record accuracy of 99.63%. On YouTube Faces DB it achieves
95.12%. Our system cuts the error rate in comparison to the best published
result by 30% on both datasets.
We also introduce the concept of harmonic embeddings, and a harmonic triplet
loss, which describe different versions of face embeddings (produced by
different networks) that are compatible to each other and allow for direct
comparison between each other.
|
2015-03-12T18:10:53Z
|
Also published in Proceedings of the IEEE Computer Society
Conference on Computer Vision and Pattern Recognition, 2015
| null |
10.1109/CVPR.2015.7298682
|
FaceNet: A unified embedding for face recognition and clustering
|
['Florian Schroff', 'Dmitry Kalenichenko', 'James Philbin']
| 2015
|
Computer Vision and Pattern Recognition
| 13,210
| 24
|
['Computer Science']
|
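The training signal described above is a triplet loss on L2-normalized embeddings: the anchor must be closer to a positive of the same identity than to a negative by a margin. A small NumPy sketch with an illustrative margin; the paper's online triplet mining is omitted:

```python
import numpy as np

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def triplet_loss(anchor, positive, negative, margin=0.2):
    """max(0, ||a - p||^2 - ||a - n||^2 + margin) on unit-norm embeddings."""
    a, p, n = map(l2_normalize, (anchor, positive, negative))
    d_pos = np.sum((a - p) ** 2, axis=-1)
    d_neg = np.sum((a - n) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin)

rng = np.random.default_rng(0)
a, p, n = rng.normal(size=(3, 4, 128))   # batch of 4 triplets, 128-d embeddings
print(triplet_loss(a, p, n))
```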
1504.00325
|
Microsoft COCO Captions: Data Collection and Evaluation Server
|
['Xinlei Chen', 'Hao Fang', 'Tsung-Yi Lin', 'Ramakrishna Vedantam', 'Saurabh Gupta', 'Piotr Dollar', 'C. Lawrence Zitnick']
|
['cs.CV', 'cs.CL']
|
In this paper we describe the Microsoft COCO Caption dataset and evaluation
server. When completed, the dataset will contain over one and a half million
captions describing over 330,000 images. For the training and validation
images, five independent human generated captions will be provided. To ensure
consistency in evaluation of automatic caption generation algorithms, an
evaluation server is used. The evaluation server receives candidate captions
and scores them using several popular metrics, including BLEU, METEOR, ROUGE
and CIDEr. Instructions for using the evaluation server are provided.
|
2015-04-01T18:13:43Z
|
arXiv admin note: text overlap with arXiv:1411.4952
| null | null | null | null | null | null | null | null | null |
1504.06375
|
Holistically-Nested Edge Detection
|
['Saining Xie', 'Zhuowen Tu']
|
['cs.CV']
|
We develop a new edge detection algorithm that tackles two important issues
in this long-standing vision problem: (1) holistic image training and
prediction; and (2) multi-scale and multi-level feature learning. Our proposed
method, holistically-nested edge detection (HED), performs image-to-image
prediction by means of a deep learning model that leverages fully convolutional
neural networks and deeply-supervised nets. HED automatically learns rich
hierarchical representations (guided by deep supervision on side responses)
that are important in order to approach the human ability to resolve the
challenging ambiguity in edge and object boundary detection. We significantly
advance the state-of-the-art on the BSD500 dataset (ODS F-score of .782) and
the NYU Depth dataset (ODS F-score of .746), and do so with an improved speed
(0.4 second per image) that is orders of magnitude faster than some recent
CNN-based edge detection algorithms.
|
2015-04-24T02:12:15Z
|
v2 Add appendix A for updated results (ODS=0.790) on BSDS-500 in a
new experiment setting. Fix typos and reorganize formulations. Add Table 2 to
discuss the role of deep supervision. Add links to publicly available
repository for code, models and data
| null | null |
Holistically-Nested Edge Detection
|
['Saining Xie', 'Z. Tu']
| 2015
|
International Journal of Computer Vision
| 3,503
| 59
|
['Computer Science']
|
1504.08083
|
Fast R-CNN
|
['Ross Girshick']
|
['cs.CV']
|
This paper proposes a Fast Region-based Convolutional Network method (Fast
R-CNN) for object detection. Fast R-CNN builds on previous work to efficiently
classify object proposals using deep convolutional networks. Compared to
previous work, Fast R-CNN employs several innovations to improve training and
testing speed while also increasing detection accuracy. Fast R-CNN trains the
very deep VGG16 network 9x faster than R-CNN, is 213x faster at test-time, and
achieves a higher mAP on PASCAL VOC 2012. Compared to SPPnet, Fast R-CNN trains
VGG16 3x faster, tests 10x faster, and is more accurate. Fast R-CNN is
implemented in Python and C++ (using Caffe) and is available under the
open-source MIT License at https://github.com/rbgirshick/fast-rcnn.
|
2015-04-30T05:13:08Z
|
To appear in ICCV 2015
| null | null |
Fast R-CNN
|
['Ross B. Girshick']
| 2015
| null | 25,181
| 23
|
['Computer Science']
|
1505.04597
|
U-Net: Convolutional Networks for Biomedical Image Segmentation
|
['Olaf Ronneberger', 'Philipp Fischer', 'Thomas Brox']
|
['cs.CV']
|
There is large consent that successful training of deep networks requires
many thousand annotated training samples. In this paper, we present a network
and training strategy that relies on the strong use of data augmentation to use
the available annotated samples more efficiently. The architecture consists of
a contracting path to capture context and a symmetric expanding path that
enables precise localization. We show that such a network can be trained
end-to-end from very few images and outperforms the prior best method (a
sliding-window convolutional network) on the ISBI challenge for segmentation of
neuronal structures in electron microscopic stacks. Using the same network
trained on transmitted light microscopy images (phase contrast and DIC) we won
the ISBI cell tracking challenge 2015 in these categories by a large margin.
Moreover, the network is fast. Segmentation of a 512x512 image takes less than
a second on a recent GPU. The full implementation (based on Caffe) and the
trained networks are available at
http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net .
|
2015-05-18T11:28:37Z
|
conditionally accepted at MICCAI 2015
| null | null | null | null | null | null | null | null | null |
1505.04870
|
Flickr30k Entities: Collecting Region-to-Phrase Correspondences for
Richer Image-to-Sentence Models
|
['Bryan A. Plummer', 'Liwei Wang', 'Chris M. Cervantes', 'Juan C. Caicedo', 'Julia Hockenmaier', 'Svetlana Lazebnik']
|
['cs.CV', 'cs.CL']
|
The Flickr30k dataset has become a standard benchmark for sentence-based
image description. This paper presents Flickr30k Entities, which augments the
158k captions from Flickr30k with 244k coreference chains, linking mentions of
the same entities across different captions for the same image, and associating
them with 276k manually annotated bounding boxes. Such annotations are
essential for continued progress in automatic image description and grounded
language understanding. They enable us to define a new benchmark for
localization of textual entity mentions in an image. We present a strong
baseline for this task that combines an image-text embedding, detectors for
common objects, a color classifier, and a bias towards selecting larger
objects. While our baseline rivals in accuracy more complex state-of-the-art
models, we show that its gains cannot be easily parlayed into improvements on
such tasks as image-sentence retrieval, thus underlining the limitations of
current methods and the need for further research.
|
2015-05-19T04:46:03Z
| null | null | null | null | null | null | null | null | null | null |
1506.01497
|
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal
Networks
|
['Shaoqing Ren', 'Kaiming He', 'Ross Girshick', 'Jian Sun']
|
['cs.CV']
|
State-of-the-art object detection networks depend on region proposal
algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN
have reduced the running time of these detection networks, exposing region
proposal computation as a bottleneck. In this work, we introduce a Region
Proposal Network (RPN) that shares full-image convolutional features with the
detection network, thus enabling nearly cost-free region proposals. An RPN is a
fully convolutional network that simultaneously predicts object bounds and
objectness scores at each position. The RPN is trained end-to-end to generate
high-quality region proposals, which are used by Fast R-CNN for detection. We
further merge RPN and Fast R-CNN into a single network by sharing their
convolutional features---using the recently popular terminology of neural
networks with 'attention' mechanisms, the RPN component tells the unified
network where to look. For the very deep VGG-16 model, our detection system has
a frame rate of 5fps (including all steps) on a GPU, while achieving
state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS
COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015
competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning
entries in several tracks. Code has been made publicly available.
|
2015-06-04T07:58:34Z
|
Extended tech report
| null | null |
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
|
['Shaoqing Ren', 'Kaiming He', 'Ross B. Girshick', 'Jian Sun']
| 2015
|
IEEE Transactions on Pattern Analysis and Machine Intelligence
| 62,776
| 47
|
['Computer Science', 'Medicine']
|
1506.02025
|
Spatial Transformer Networks
|
['Max Jaderberg', 'Karen Simonyan', 'Andrew Zisserman', 'Koray Kavukcuoglu']
|
['cs.CV']
|
Convolutional Neural Networks define an exceptionally powerful class of
models, but are still limited by the lack of ability to be spatially invariant
to the input data in a computationally and parameter efficient manner. In this
work we introduce a new learnable module, the Spatial Transformer, which
explicitly allows the spatial manipulation of data within the network. This
differentiable module can be inserted into existing convolutional
architectures, giving neural networks the ability to actively spatially
transform feature maps, conditional on the feature map itself, without any
extra training supervision or modification to the optimisation process. We show
that the use of spatial transformers results in models which learn invariance
to translation, scale, rotation and more generic warping, resulting in
state-of-the-art performance on several benchmarks, and for a number of classes
of transformations.
|
2015-06-05T19:54:26Z
| null | null | null |
Spatial Transformer Networks
|
['Max Jaderberg', 'K. Simonyan', 'Andrew Zisserman', 'K. Kavukcuoglu']
| 2015
|
Neural Information Processing Systems
| 7,417
| 42
|
['Computer Science', 'Mathematics']
|
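A spatial transformer warps a feature map through a differentiable sampling grid generated from predicted transformation parameters. A minimal PyTorch sketch for an affine transform; the localisation network that would predict `theta` is omitted, and the rotation here is fixed purely for illustration:

```python
import math
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 32, 32)   # input feature map (N, C, H, W)

# Affine parameters theta with shape (N, 2, 3); in practice these are
# predicted per sample by a small localisation network.
a = 0.3  # radians
theta = torch.tensor([[[math.cos(a), -math.sin(a), 0.0],
                       [math.sin(a),  math.cos(a), 0.0]]])

grid = F.affine_grid(theta, size=x.shape, align_corners=False)     # (N, H, W, 2)
warped = F.grid_sample(x, grid, mode="bilinear", align_corners=False)
print(warped.shape)   # torch.Size([1, 3, 32, 32])
```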
1506.02640
|
You Only Look Once: Unified, Real-Time Object Detection
|
['Joseph Redmon', 'Santosh Divvala', 'Ross Girshick', 'Ali Farhadi']
|
['cs.CV']
|
We present YOLO, a new approach to object detection. Prior work on object
detection repurposes classifiers to perform detection. Instead, we frame object
detection as a regression problem to spatially separated bounding boxes and
associated class probabilities. A single neural network predicts bounding boxes
and class probabilities directly from full images in one evaluation. Since the
whole detection pipeline is a single network, it can be optimized end-to-end
directly on detection performance.
Our unified architecture is extremely fast. Our base YOLO model processes
images in real-time at 45 frames per second. A smaller version of the network,
Fast YOLO, processes an astounding 155 frames per second while still achieving
double the mAP of other real-time detectors. Compared to state-of-the-art
detection systems, YOLO makes more localization errors but is far less likely
to predict false detections where nothing exists. Finally, YOLO learns very
general representations of objects. It outperforms all other detection methods,
including DPM and R-CNN, by a wide margin when generalizing from natural images
to artwork on both the Picasso Dataset and the People-Art Dataset.
|
2015-06-08T19:52:52Z
| null | null | null | null | null | null | null | null | null | null |
1506.03365
|
LSUN: Construction of a Large-scale Image Dataset using Deep Learning
with Humans in the Loop
|
['Fisher Yu', 'Ari Seff', 'Yinda Zhang', 'Shuran Song', 'Thomas Funkhouser', 'Jianxiong Xiao']
|
['cs.CV']
|
While there has been remarkable progress in the performance of visual
recognition algorithms, the state-of-the-art models tend to be exceptionally
data-hungry. Large labeled training datasets, expensive and tedious to produce,
are required to optimize millions of parameters in deep network models. Lagging
behind the growth in model capacity, the available datasets are quickly
becoming outdated in terms of size and density. To circumvent this bottleneck,
we propose to amplify human effort through a partially automated labeling
scheme, leveraging deep learning with humans in the loop. Starting from a large
set of candidate images for each category, we iteratively sample a subset, ask
people to label them, classify the others with a trained model, split the set
into positives, negatives, and unlabeled based on the classification
confidence, and then iterate with the unlabeled set. To assess the
effectiveness of this cascading procedure and enable further progress in visual
recognition research, we construct a new image dataset, LSUN. It contains
around one million labeled images for each of 10 scene categories and 20 object
categories. We experiment with training popular convolutional networks and find
that they achieve substantial performance gains when trained on this dataset.
|
2015-06-10T15:38:47Z
| null | null | null |
LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop
|
['F. Yu', 'Yinda Zhang', 'Shuran Song', 'Ari Seff', 'Jianxiong Xiao']
| 2015
|
arXiv.org
| 2,350
| 28
|
['Computer Science']
|
1507.05717
|
An End-to-End Trainable Neural Network for Image-based Sequence
Recognition and Its Application to Scene Text Recognition
|
['Baoguang Shi', 'Xiang Bai', 'Cong Yao']
|
['cs.CV']
|
Image-based sequence recognition has been a long-standing research topic in
computer vision. In this paper, we investigate the problem of scene text
recognition, which is among the most important and challenging tasks in
image-based sequence recognition. A novel neural network architecture, which
integrates feature extraction, sequence modeling and transcription into a
unified framework, is proposed. Compared with previous systems for scene text
recognition, the proposed architecture possesses four distinctive properties:
(1) It is end-to-end trainable, in contrast to most of the existing algorithms
whose components are separately trained and tuned. (2) It naturally handles
sequences in arbitrary lengths, involving no character segmentation or
horizontal scale normalization. (3) It is not confined to any predefined
lexicon and achieves remarkable performances in both lexicon-free and
lexicon-based scene text recognition tasks. (4) It generates an effective yet
much smaller model, which is more practical for real-world application
scenarios. The experiments on standard benchmarks, including the IIIT-5K,
Street View Text and ICDAR datasets, demonstrate the superiority of the
proposed algorithm over the prior arts. Moreover, the proposed algorithm
performs well in the task of image-based music score recognition, which
evidently verifies the generality of it.
|
2015-07-21T06:26:32Z
|
5 figures
| null | null | null | null | null | null | null | null | null |
1508.00305
|
Compositional Semantic Parsing on Semi-Structured Tables
|
['Panupong Pasupat', 'Percy Liang']
|
['cs.CL']
|
Two important aspects of semantic parsing for question answering are the
breadth of the knowledge source and the depth of logical compositionality.
While existing work trades off one aspect for another, this paper
simultaneously makes progress on both fronts through a new task: answering
complex questions on semi-structured tables using question-answer pairs as
supervision. The central challenge arises from two compounding factors: the
broader domain results in an open-ended set of relations, and the deeper
compositionality results in a combinatorial explosion in the space of logical
forms. We propose a logical-form driven parsing algorithm guided by strong
typing constraints and show that it obtains significant improvements over
natural baselines. For evaluation, we created a new dataset of 22,033 complex
questions on Wikipedia tables, which is made publicly available.
|
2015-08-03T02:53:01Z
| null | null | null | null | null | null | null | null | null | null |
1508.01991
|
Bidirectional LSTM-CRF Models for Sequence Tagging
|
['Zhiheng Huang', 'Wei Xu', 'Kai Yu']
|
['cs.CL']
|
In this paper, we propose a variety of Long Short-Term Memory (LSTM) based
models for sequence tagging. These models include LSTM networks, bidirectional
LSTM (BI-LSTM) networks, LSTM with a Conditional Random Field (CRF) layer
(LSTM-CRF) and bidirectional LSTM with a CRF layer (BI-LSTM-CRF). Our work is
the first to apply a bidirectional LSTM CRF (denoted as BI-LSTM-CRF) model to
NLP benchmark sequence tagging data sets. We show that the BI-LSTM-CRF model
can efficiently use both past and future input features thanks to a
bidirectional LSTM component. It can also use sentence level tag information
thanks to a CRF layer. The BI-LSTM-CRF model can produce state of the art (or
close to) accuracy on POS, chunking and NER data sets. In addition, it is
robust and has less dependence on word embedding as compared to previous
observations.
|
2015-08-09T06:32:47Z
| null | null | null |
Bidirectional LSTM-CRF Models for Sequence Tagging
|
['Zhiheng Huang', 'W. Xu', 'Kai Yu']
| 2015
|
arXiv.org
| 4,042
| 35
|
['Computer Science']
|
1508.05326
|
A large annotated corpus for learning natural language inference
|
['Samuel R. Bowman', 'Gabor Angeli', 'Christopher Potts', 'Christopher D. Manning']
|
['cs.CL']
|
Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time.
|
2015-08-21T16:17:01Z
|
To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/
| null | null | null | null | null | null | null | null | null |
1508.07909
|
Neural Machine Translation of Rare Words with Subword Units
|
['Rico Sennrich', 'Barry Haddow', 'Alexandra Birch']
|
['cs.CL']
|
Neural machine translation (NMT) models typically operate with a fixed
vocabulary, but translation is an open-vocabulary problem. Previous work
addresses the translation of out-of-vocabulary words by backing off to a
dictionary. In this paper, we introduce a simpler and more effective approach,
making the NMT model capable of open-vocabulary translation by encoding rare
and unknown words as sequences of subword units. This is based on the intuition
that various word classes are translatable via smaller units than words, for
instance names (via character copying or transliteration), compounds (via
compositional translation), and cognates and loanwords (via phonological and
morphological transformations). We discuss the suitability of different word
segmentation techniques, including simple character n-gram models and a
segmentation based on the byte pair encoding compression algorithm, and
empirically show that subword models improve over a back-off dictionary
baseline for the WMT 15 translation tasks English-German and English-Russian by
1.1 and 1.3 BLEU, respectively.
|
2015-08-31T16:37:31Z
|
accepted at ACL 2016; new in this version: figure 3
| null | null |
Neural Machine Translation of Rare Words with Subword Units
|
['Rico Sennrich', 'B. Haddow', 'Alexandra Birch']
| 2015
|
Annual Meeting of the Association for Computational Linguistics
| 7,779
| 42
|
['Computer Science']
|
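The subword segmentation described above rests on byte pair encoding: repeatedly merge the most frequent adjacent symbol pair in a character-level vocabulary. A compact Python sketch of that loop on a toy vocabulary, not the released subword-nmt tool:

```python
import re
from collections import Counter

def pair_stats(vocab):
    """Count adjacent symbol pairs over a {space-separated word: frequency} vocab."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[(symbols[i], symbols[i + 1])] += freq
    return pairs

def merge_pair(pair, vocab):
    """Replace every occurrence of the pair with its concatenation."""
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    return {pattern.sub("".join(pair), word): freq for word, freq in vocab.items()}

# Words are sequences of characters plus an end-of-word marker.
vocab = {"l o w </w>": 5, "l o w e r </w>": 2,
         "n e w e s t </w>": 6, "w i d e s t </w>": 3}
for _ in range(5):
    best = pair_stats(vocab).most_common(1)[0][0]
    vocab = merge_pair(best, vocab)
    print("merged", best)
```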
1509.00519
|
Importance Weighted Autoencoders
|
['Yuri Burda', 'Roger Grosse', 'Ruslan Salakhutdinov']
|
['cs.LG', 'stat.ML']
|
The variational autoencoder (VAE; Kingma, Welling (2014)) is a recently
proposed generative model pairing a top-down generative network with a
bottom-up recognition network which approximates posterior inference. It
typically makes strong assumptions about posterior inference, for instance that
the posterior distribution is approximately factorial, and that its parameters
can be approximated with nonlinear regression from the observations. As we show
empirically, the VAE objective can lead to overly simplified representations
which fail to use the network's entire modeling capacity. We present the
importance weighted autoencoder (IWAE), a generative model with the same
architecture as the VAE, but which uses a strictly tighter log-likelihood lower
bound derived from importance weighting. In the IWAE, the recognition network
uses multiple samples to approximate the posterior, giving it increased
flexibility to model complex posteriors which do not fit the VAE modeling
assumptions. We show empirically that IWAEs learn richer latent space
representations than VAEs, leading to improved test log-likelihood on density
estimation benchmarks.
|
2015-09-01T22:33:13Z
|
Submitted to ICLR 2015
| null | null | null | null | null | null | null | null | null |
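The tighter bound described above averages k importance weights inside the logarithm rather than outside. A small NumPy sketch of that bound computed from per-sample log-weights with a stable log-sum-exp; the numbers are synthetic:

```python
import numpy as np

def iwae_bound(log_w):
    """log_w: (k,) importance log-weights log p(x, h_i) - log q(h_i | x).
    Returns log( (1/k) * sum_i exp(log_w_i) ), computed stably."""
    k = log_w.shape[0]
    m = log_w.max()
    return m + np.log(np.exp(log_w - m).sum()) - np.log(k)

rng = np.random.default_rng(0)
log_w = rng.normal(loc=-10.0, scale=1.0, size=50)   # k = 50 samples
print("single-sample (VAE-style) bound:", log_w.mean())
print("IWAE bound (k=50):", iwae_bound(log_w))      # never smaller, by Jensen
```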
1510.03055
|
A Diversity-Promoting Objective Function for Neural Conversation Models
|
['Jiwei Li', 'Michel Galley', 'Chris Brockett', 'Jianfeng Gao', 'Bill Dolan']
|
['cs.CL']
|
Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations.
|
2015-10-11T14:04:57Z
|
In Proc. of NAACL 2016
| null | null |
A Diversity-Promoting Objective Function for Neural Conversation Models
|
['Jiwei Li', 'Michel Galley', 'Chris Brockett', 'Jianfeng Gao', 'W. Dolan']
| 2015
|
North American Chapter of the Association for Computational Linguistics
| 2,407
| 49
|
['Computer Science']
|
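In its anti-language-model form, the MMI criterion described above rescores candidate responses by log p(T|S) - λ log p(T), penalizing generically likely replies. A toy NumPy sketch of that reranking; λ and the log-probabilities are illustrative:

```python
import numpy as np

def mmi_antilm_rerank(logp_t_given_s, logp_t, lam=0.5):
    """Score each candidate response T by log p(T|S) - lam * log p(T)
    and return candidate indices sorted from best to worst."""
    scores = logp_t_given_s - lam * logp_t
    return np.argsort(-scores), scores

candidates = ["i don't know", "it opens at nine on weekdays", "maybe"]
logp_t_given_s = np.array([-4.0, -6.5, -5.0])   # seq2seq log-likelihoods
logp_t = np.array([-2.0, -9.0, -3.0])           # language-model log-priors
order, scores = mmi_antilm_rerank(logp_t_given_s, logp_t)
for i in order:
    print(f"{scores[i]:6.2f}  {candidates[i]}")  # the generic reply loses the top spot
```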
1510.08484
|
MUSAN: A Music, Speech, and Noise Corpus
|
['David Snyder', 'Guoguo Chen', 'Daniel Povey']
|
['cs.SD']
|
This report introduces a new corpus of music, speech, and noise. This dataset
is suitable for training models for voice activity detection (VAD) and
music/speech discrimination. Our corpus is released under a flexible Creative
Commons license. The dataset consists of music from several genres, speech from
twelve languages, and a wide assortment of technical and non-technical noises.
We demonstrate use of this corpus for music/speech discrimination on Broadcast
news and VAD for speaker identification.
|
2015-10-28T20:59:04Z
| null | null | null | null | null | null | null | null | null | null |
1511.02283
|
Generation and Comprehension of Unambiguous Object Descriptions
|
['Junhua Mao', 'Jonathan Huang', 'Alexander Toshev', 'Oana Camburu', 'Alan Yuille', 'Kevin Murphy']
|
['cs.CV', 'cs.CL', 'cs.LG', 'cs.RO', 'I.2.6; I.2.7; I.2.10']
|
We propose a method that can generate an unambiguous description (known as a
referring expression) of a specific object or region in an image, and which can
also comprehend or interpret such an expression to infer which object is being
described. We show that our method outperforms previous methods that generate
descriptions of objects without taking into account other potentially ambiguous
objects in the scene. Our model is inspired by recent successes of deep
learning methods for image captioning, but while image captioning is difficult
to evaluate, our task allows for easy objective evaluation. We also present a
new large-scale dataset for referring expressions, based on MS-COCO. We have
released the dataset and a toolbox for visualization and evaluation, see
https://github.com/mjhucla/Google_Refexp_toolbox
|
2015-11-07T02:17:36Z
|
We have released the Google Refexp dataset together with a toolbox
for visualization and evaluation, see
https://github.com/mjhucla/Google_Refexp_toolbox. Camera ready version for
CVPR 2016
| null | null | null | null | null | null | null | null | null |
1511.03086
|
The CTU Prague Relational Learning Repository
|
['Jan Motl', 'Oliver Schulte']
|
['cs.LG', 'cs.DB', 'I.2.6; H.2.8']
|
The aim of the Prague Relational Learning Repository is to support machine
learning research with multi-relational data. The repository currently contains
148 SQL databases hosted on a public MySQL server located at
https://relational.fel.cvut.cz. The server is provided by the Czech Technical
University (CTU). A searchable meta-database provides metadata (e.g., the
number of tables in the database, the number of rows and columns in the tables,
the number of self-relationships).
|
2015-11-10T12:30:42Z
|
9 pages
| null | null | null | null | null | null | null | null | null |
1511.06434
|
Unsupervised Representation Learning with Deep Convolutional Generative
Adversarial Networks
|
['Alec Radford', 'Luke Metz', 'Soumith Chintala']
|
['cs.LG', 'cs.CV']
|
In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations.
|
2015-11-19T22:50:32Z
|
Under review as a conference paper at ICLR 2016
| null | null | null | null | null | null | null | null | null |
1511.06581
|
Dueling Network Architectures for Deep Reinforcement Learning
|
['Ziyu Wang', 'Tom Schaul', 'Matteo Hessel', 'Hado van Hasselt', 'Marc Lanctot', 'Nando de Freitas']
|
['cs.LG']
|
In recent years there have been many successes of using deep representations
in reinforcement learning. Still, many of these applications use conventional
architectures, such as convolutional networks, LSTMs, or auto-encoders. In this
paper, we present a new neural network architecture for model-free
reinforcement learning. Our dueling network represents two separate estimators:
one for the state value function and one for the state-dependent action
advantage function. The main benefit of this factoring is to generalize
learning across actions without imposing any change to the underlying
reinforcement learning algorithm. Our results show that this architecture leads
to better policy evaluation in the presence of many similar-valued actions.
Moreover, the dueling architecture enables our RL agent to outperform the
state-of-the-art on the Atari 2600 domain.
|
2015-11-20T13:07:54Z
|
15 pages, 5 figures, and 5 tables
| null | null | null | null | null | null | null | null | null |
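The dueling architecture described above recombines a state-value stream and an advantage stream into Q-values, subtracting the mean advantage to keep the two identifiable. A minimal NumPy sketch of that aggregation; the networks behind the two streams are omitted:

```python
import numpy as np

def dueling_q(value, advantages):
    """Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

v = np.array([[1.5]])                      # state value from the value stream
adv = np.array([[0.2, -0.1, 0.6, -0.7]])   # per-action advantages
q = dueling_q(v, adv)
print(q)                 # [[1.7  1.4  2.1  0.8]]
print(q.mean(axis=-1))   # equals V(s) = 1.5
```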
1511.09207
|
Incidental Scene Text Understanding: Recent Progresses on ICDAR 2015
Robust Reading Competition Challenge 4
|
['Cong Yao', 'Jianan Wu', 'Xinyu Zhou', 'Chi Zhang', 'Shuchang Zhou', 'Zhimin Cao', 'Qi Yin']
|
['cs.CV']
|
Different from focused texts present in natural images, which are captured
with user's intention and intervention, incidental texts usually exhibit much
more diversity, variability and complexity, thus posing significant
difficulties and challenges for scene text detection and recognition
algorithms. The ICDAR 2015 Robust Reading Competition Challenge 4 was launched
to assess the performance of existing scene text detection and recognition
methods on incidental texts as well as to stimulate novel ideas and solutions.
This report is dedicated to briefly introduce our strategies for this
challenging problem and compare them with prior arts in this field.
|
2015-11-30T09:08:02Z
|
3 pages, 2 figures, 5 tables
| null | null | null | null | null | null | null | null | null |
1512.00567
|
Rethinking the Inception Architecture for Computer Vision
|
['Christian Szegedy', 'Vincent Vanhoucke', 'Sergey Ioffe', 'Jonathon Shlens', 'Zbigniew Wojna']
|
['cs.CV']
|
Convolutional networks are at the core of most state-of-the-art computer
vision solutions for a wide variety of tasks. Since 2014 very deep
convolutional networks started to become mainstream, yielding substantial gains
in various benchmarks. Although increased model size and computational cost
tend to translate to immediate quality gains for most tasks (as long as enough
labeled data is provided for training), computational efficiency and low
parameter count are still enabling factors for various use cases such as mobile
vision and big-data scenarios. Here we explore ways to scale up networks in
ways that aim at utilizing the added computation as efficiently as possible by
suitably factorized convolutions and aggressive regularization. We benchmark
our methods on the ILSVRC 2012 classification challenge validation set and
demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6%
top-5 error for single frame evaluation using a network with a computational
cost of 5 billion multiply-adds per inference and using less than 25
million parameters. With an ensemble of 4 models and multi-crop evaluation, we
report 3.5% top-5 error on the validation set (3.6% error on the test set) and
17.3% top-1 error on the validation set.
|
2015-12-02T03:44:38Z
| null | null | null | null | null | null | null | null | null | null |
1512.02134
|
A Large Dataset to Train Convolutional Networks for Disparity, Optical
Flow, and Scene Flow Estimation
|
['Nikolaus Mayer', 'Eddy Ilg', 'Philip Häusser', 'Philipp Fischer', 'Daniel Cremers', 'Alexey Dosovitskiy', 'Thomas Brox']
|
['cs.CV', 'cs.LG', 'stat.ML', 'I.2.6; I.2.10; I.4.8']
|
Recent work has shown that optical flow estimation can be formulated as a
supervised learning task and can be successfully solved with convolutional
networks. Training of the so-called FlowNet was enabled by a large
synthetically generated dataset. The present paper extends the concept of
optical flow estimation via convolutional networks to disparity and scene flow
estimation. To this end, we propose three synthetic stereo video datasets with
sufficient realism, variation, and size to successfully train large networks.
Our datasets are the first large-scale datasets to enable training and
evaluating scene flow methods. Besides the datasets, we present a convolutional
network for real-time disparity estimation that provides state-of-the-art
results. By combining a flow and disparity estimation network and training it
jointly, we demonstrate the first scene flow estimation with a convolutional
network.
|
2015-12-07T17:35:00Z
|
Includes supplementary material
| null |
10.1109/CVPR.2016.438
|
A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation
|
['N. Mayer', 'Eddy Ilg', 'Philip Häusser', 'P. Fischer', 'D. Cremers', 'Alexey Dosovitskiy', 'T. Brox']
| 2015
|
Computer Vision and Pattern Recognition
| 2,656
| 30
|
['Computer Science', 'Mathematics']
|
1512.02325
|
SSD: Single Shot MultiBox Detector
|
['Wei Liu', 'Dragomir Anguelov', 'Dumitru Erhan', 'Christian Szegedy', 'Scott Reed', 'Cheng-Yang Fu', 'Alexander C. Berg']
|
['cs.CV']
|
We present a method for detecting objects in images using a single deep
neural network. Our approach, named SSD, discretizes the output space of
bounding boxes into a set of default boxes over different aspect ratios and
scales per feature map location. At prediction time, the network generates
scores for the presence of each object category in each default box and
produces adjustments to the box to better match the object shape. Additionally,
the network combines predictions from multiple feature maps with different
resolutions to naturally handle objects of various sizes. Our SSD model is
simple relative to methods that require object proposals because it completely
eliminates proposal generation and subsequent pixel or feature resampling stage
and encapsulates all computation in a single network. This makes SSD easy to
train and straightforward to integrate into systems that require a detection
component. Experimental results on the PASCAL VOC, MS COCO, and ILSVRC datasets
confirm that SSD has comparable accuracy to methods that utilize an additional
object proposal step and is much faster, while providing a unified framework
for both training and inference. Compared to other single stage methods, SSD
has much better accuracy, even with a smaller input image size. For $300\times
300$ input, SSD achieves 72.1% mAP on VOC2007 test at 58 FPS on a Nvidia Titan
X and for $500\times 500$ input, SSD achieves 75.1% mAP, outperforming a
comparable state of the art Faster R-CNN model. Code is available at
https://github.com/weiliu89/caffe/tree/ssd .
|
2015-12-08T04:46:38Z
|
ECCV 2016
| null |
10.1007/978-3-319-46448-0_2
| null | null | null | null | null | null | null |
1512.03385
|
Deep Residual Learning for Image Recognition
|
['Kaiming He', 'Xiangyu Zhang', 'Shaoqing Ren', 'Jian Sun']
|
['cs.CV']
|
Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation.
|
2015-12-10T19:51:55Z
|
Tech report
| null | null | null | null | null | null | null | null | null |
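The residual reformulation above amounts to learning F(x) and emitting F(x) + x through an identity shortcut. A minimal PyTorch sketch of the basic two-layer block without downsampling, written from the abstract rather than the authors' released models:

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """y = ReLU(F(x) + x), where F is conv-BN-ReLU-conv-BN."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # identity shortcut

block = BasicResidualBlock(64)
x = torch.randn(2, 64, 56, 56)
print(block(x).shape)   # torch.Size([2, 64, 56, 56])
```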
1602.00134
|
Convolutional Pose Machines
|
['Shih-En Wei', 'Varun Ramakrishna', 'Takeo Kanade', 'Yaser Sheikh']
|
['cs.CV']
|
Pose Machines provide a sequential prediction framework for learning rich
implicit spatial models. In this work we show a systematic design for how
convolutional networks can be incorporated into the pose machine framework for
learning image features and image-dependent spatial models for the task of pose
estimation. The contribution of this paper is to implicitly model long-range
dependencies between variables in structured prediction tasks such as
articulated pose estimation. We achieve this by designing a sequential
architecture composed of convolutional networks that directly operate on belief
maps from previous stages, producing increasingly refined estimates for part
locations, without the need for explicit graphical model-style inference. Our
approach addresses the characteristic difficulty of vanishing gradients during
training by providing a natural learning objective function that enforces
intermediate supervision, thereby replenishing back-propagated gradients and
conditioning the learning procedure. We demonstrate state-of-the-art
performance and outperform competing methods on standard benchmarks including
the MPII, LSP, and FLIC datasets.
|
2016-01-30T16:15:28Z
|
camera ready
| null | null | null | null | null | null | null | null | null |
1602.00763
|
Simple Online and Realtime Tracking
|
['Alex Bewley', 'Zongyuan Ge', 'Lionel Ott', 'Fabio Ramos', 'Ben Upcroft']
|
['cs.CV']
|
This paper explores a pragmatic approach to multiple object tracking where
the main focus is to associate objects efficiently for online and realtime
applications. To this end, detection quality is identified as a key factor
influencing tracking performance, where changing the detector can improve
tracking by up to 18.9%. Despite only using a rudimentary combination of
familiar techniques such as the Kalman Filter and Hungarian algorithm for the
tracking components, this approach achieves an accuracy comparable to
state-of-the-art online trackers. Furthermore, due to the simplicity of our
tracking method, the tracker updates at a rate of 260 Hz which is over 20x
faster than other state-of-the-art trackers.
|
2016-02-02T01:39:28Z
|
Presented at ICIP 2016, code is available at
https://github.com/abewley/sort
| null |
10.1109/ICIP.2016.7533003
|
Simple online and realtime tracking
|
['A. Bewley', 'ZongYuan Ge', 'Lionel Ott', 'F. Ramos', 'B. Upcroft']
| 2016
|
International Conference on Information Photonics
| 3,127
| 26
|
['Computer Science']
|
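The association step described above can be posed as bipartite matching between predicted track boxes and new detections, solved with the Hungarian algorithm over IoU overlap. A small Python sketch using SciPy; the Kalman-filter motion model is omitted and the IoU threshold is illustrative:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, iou_threshold=0.3):
    """Hungarian matching on negative IoU; reject matches below the threshold."""
    cost = np.array([[-iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if -cost[r, c] >= iou_threshold]

tracks = [(10, 10, 50, 50), (100, 100, 140, 150)]
detections = [(98, 105, 142, 148), (12, 8, 52, 49)]
print(associate(tracks, detections))   # [(0, 1), (1, 0)]
```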
1602.02355
|
Hyperparameter optimization with approximate gradient
|
['Fabian Pedregosa']
|
['stat.ML', 'cs.LG', 'math.OC']
|
Most models in machine learning contain at least one hyperparameter to
control for model complexity. Choosing an appropriate set of hyperparameters is
both crucial in terms of model accuracy and computationally challenging. In
this work we propose an algorithm for the optimization of continuous
hyperparameters using inexact gradient information. An advantage of this method
is that hyperparameters can be updated before model parameters have fully
converged. We also give sufficient conditions for the global convergence of
this method, based on regularity conditions of the involved functions and
summability of errors. Finally, we validate the empirical performance of this
method on the estimation of regularization constants of L2-regularized logistic
regression and kernel Ridge regression. Empirical benchmarks indicate that our
approach is highly competitive with respect to state of the art methods.
|
2016-02-07T10:37:13Z
|
Fixes error in proof of Theorem 2
| null | null | null | null | null | null | null | null | null |
1602.02644
|
Generating Images with Perceptual Similarity Metrics based on Deep
Networks
|
['Alexey Dosovitskiy', 'Thomas Brox']
|
['cs.LG', 'cs.CV', 'cs.NE']
|
Image-generating machine learning models are typically trained with loss
functions based on distance in the image space. This often leads to
over-smoothed results. We propose a class of loss functions, which we call deep
perceptual similarity metrics (DeePSiM), that mitigate this problem. Instead of
computing distances in the image space, we compute distances between image
features extracted by deep neural networks. This metric better reflects
perceptual similarity of images and thus leads to better results. We show
three applications: autoencoder training, a modification of a variational
autoencoder, and inversion of deep convolutional networks. In all cases, the
generated images look sharp and resemble natural images.
|
2016-02-08T16:50:28Z
|
minor corrections
| null | null | null | null | null | null | null | null | null |
1602.03012
|
EndoNet: A Deep Architecture for Recognition Tasks on Laparoscopic
Videos
|
['Andru P. Twinanda', 'Sherif Shehata', 'Didier Mutter', 'Jacques Marescaux', 'Michel de Mathelin', 'Nicolas Padoy']
|
['cs.CV']
|
Surgical workflow recognition has numerous potential medical applications,
such as the automatic indexing of surgical video databases and the optimization
of real-time operating room scheduling, among others. As a result, phase
recognition has been studied in the context of several kinds of surgeries, such
as cataract, neurological, and laparoscopic surgeries. In the literature, two
types of features are typically used to perform this task: visual features and
tool usage signals. However, the visual features used are mostly handcrafted.
Furthermore, the tool usage signals are usually collected via a manual
annotation process or by using additional equipment. In this paper, we propose
a novel method for phase recognition that uses a convolutional neural network
(CNN) to automatically learn features from cholecystectomy videos and that
relies uniquely on visual information. In previous studies, it has been shown
that the tool signals can provide valuable information in performing the phase
recognition task. Thus, we present a novel CNN architecture, called EndoNet,
that is designed to carry out the phase recognition and tool presence detection
tasks in a multi-task manner. To the best of our knowledge, this is the first
work proposing to use a CNN for multiple recognition tasks on laparoscopic
videos. Extensive experimental comparisons to other methods show that EndoNet
yields state-of-the-art results for both tasks.
|
2016-02-09T14:58:12Z
|
Video: https://www.youtube.com/watch?v=6v0NWrFOUUM
| null | null | null | null | null | null | null | null | null |
1602.06023
|
Abstractive Text Summarization Using Sequence-to-Sequence RNNs and
Beyond
|
['Ramesh Nallapati', 'Bowen Zhou', 'Cicero Nogueira dos santos', 'Caglar Gulcehre', 'Bing Xiang']
|
['cs.CL']
|
In this work, we model abstractive text summarization using Attentional
Encoder-Decoder Recurrent Neural Networks, and show that they achieve
state-of-the-art performance on two different corpora. We propose several novel
models that address critical problems in summarization that are not adequately
modeled by the basic architecture, such as modeling key-words, capturing the
hierarchy of sentence-to-word structure, and emitting words that are rare or
unseen at training time. Our work shows that many of our proposed models
contribute to further improvement in performance. We also propose a new dataset
consisting of multi-sentence summaries, and establish performance benchmarks
for further research.
|
2016-02-19T02:04:18Z
| null |
The SIGNLL Conference on Computational Natural Language Learning
(CoNLL), 2016
| null |
Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond
|
['Ramesh Nallapati', 'Bowen Zhou', 'C. D. Santos', 'Çaglar Gülçehre', 'Bing Xiang']
| 2016
|
Conference on Computational Natural Language Learning
| 2,569
| 34
|
['Computer Science']
|
1602.07261
|
Inception-v4, Inception-ResNet and the Impact of Residual Connections on
Learning
|
['Christian Szegedy', 'Sergey Ioffe', 'Vincent Vanhoucke', 'Alex Alemi']
|
['cs.CV']
|
Very deep convolutional networks have been central to the largest advances in
image recognition performance in recent years. One example is the Inception
architecture that has been shown to achieve very good performance at relatively
low computational cost. Recently, the introduction of residual connections in
conjunction with a more traditional architecture has yielded state-of-the-art
performance in the 2015 ILSVRC challenge; its performance was similar to the
latest generation Inception-v3 network. This raises the question of whether
there is any benefit in combining the Inception architecture with residual
connections. Here we give clear empirical evidence that training with residual
connections accelerates the training of Inception networks significantly. There
is also some evidence of residual Inception networks outperforming similarly
expensive Inception networks without residual connections by a thin margin. We
also present several new streamlined architectures for both residual and
non-residual Inception networks. These variations improve the single-frame
recognition performance on the ILSVRC 2012 classification task significantly.
We further demonstrate how proper activation scaling stabilizes the training of
very wide residual Inception networks. With an ensemble of three residual and
one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the
ImageNet classification (CLS) challenge
|
2016-02-23T18:44:39Z
| null | null | null |
Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
|
['Christian Szegedy', 'Sergey Ioffe', 'Vincent Vanhoucke', 'Alexander A. Alemi']
| 2016
|
AAAI Conference on Artificial Intelligence
| 14,324
| 23
|
['Computer Science']
|
1602.07360
|
SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB
model size
|
['Forrest N. Iandola', 'Song Han', 'Matthew W. Moskewicz', 'Khalid Ashraf', 'William J. Dally', 'Kurt Keutzer']
|
['cs.CV', 'cs.AI']
|
Recent research on deep neural networks has focused primarily on improving
accuracy. For a given accuracy level, it is typically possible to identify
multiple DNN architectures that achieve that accuracy level. With equivalent
accuracy, smaller DNN architectures offer at least three advantages: (1)
Smaller DNNs require less communication across servers during distributed
training. (2) Smaller DNNs require less bandwidth to export a new model from
the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on
FPGAs and other hardware with limited memory. To provide all of these
advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet
achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters.
Additionally, with model compression techniques we are able to compress
SqueezeNet to less than 0.5MB (510x smaller than AlexNet).
The SqueezeNet architecture is available for download here:
https://github.com/DeepScale/SqueezeNet
|
2016-02-24T00:09:45Z
|
In ICLR Format
| null | null |
SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size
|
['F. Iandola', 'Matthew W. Moskewicz', 'Khalid Ashraf', 'Song Han', 'W. Dally', 'K. Keutzer']
| 2016
|
arXiv.org
| 7,522
| 52
|
['Computer Science']
|
1603.01360
|
Neural Architectures for Named Entity Recognition
|
['Guillaume Lample', 'Miguel Ballesteros', 'Sandeep Subramanian', 'Kazuya Kawakami', 'Chris Dyer']
|
['cs.CL']
|
State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers.
|
2016-03-04T06:36:29Z
|
Proceedings of NAACL 2016
| null | null | null | null | null | null | null | null | null |
1603.05027
|
Identity Mappings in Deep Residual Networks
|
['Kaiming He', 'Xiangyu Zhang', 'Shaoqing Ren', 'Jian Sun']
|
['cs.CV', 'cs.LG']
|
Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers
|
2016-03-16T10:53:56Z
|
ECCV 2016 camera-ready
| null | null | null | null | null | null | null | null | null |
1603.07396
|
A Diagram Is Worth A Dozen Images
|
['Aniruddha Kembhavi', 'Mike Salvato', 'Eric Kolve', 'Minjoon Seo', 'Hannaneh Hajishirzi', 'Ali Farhadi']
|
['cs.CV', 'cs.AI']
|
Diagrams are common tools for representing complex concepts, relationships
and events, often when it would be difficult to portray the same information
with natural images. Understanding natural images has been extensively studied
in computer vision, while diagram understanding has received little attention.
In this paper, we study the problem of diagram interpretation and reasoning,
the challenging task of identifying the structure of a diagram and the
semantics of its constituents and their relationships. We introduce Diagram
Parse Graphs (DPG) as our representation to model the structure of diagrams. We
define syntactic parsing of diagrams as learning to infer DPGs for diagrams and
study semantic interpretation and reasoning of diagrams in the context of
diagram question answering. We devise an LSTM-based method for syntactic
parsing of diagrams and introduce a DPG-based attention model for diagram
question answering. We compile a new dataset of diagrams with exhaustive
annotations of constituents and relationships for over 5,000 diagrams and
15,000 questions and answers. Our results show the significance of our models
for syntactic parsing and question answering in diagrams using DPGs.
|
2016-03-24T00:02:58Z
| null | null | null | null | null | null | null | null | null | null |
1,603.08155
|
Perceptual Losses for Real-Time Style Transfer and Super-Resolution
|
['Justin Johnson', 'Alexandre Alahi', 'Li Fei-Fei']
|
['cs.CV', 'cs.LG']
|
We consider image transformation problems, where an input image is
transformed into an output image. Recent methods for such problems typically
train feed-forward convolutional neural networks using a \emph{per-pixel} loss
between the output and ground-truth images. Parallel work has shown that
high-quality images can be generated by defining and optimizing
\emph{perceptual} loss functions based on high-level features extracted from
pretrained networks. We combine the benefits of both approaches, and propose
the use of perceptual loss functions for training feed-forward networks for
image transformation tasks. We show results on image style transfer, where a
feed-forward network is trained to solve the optimization problem proposed by
Gatys et al. in real time. Compared to the optimization-based method, our
network gives similar qualitative results but is three orders of magnitude
faster. We also experiment with single-image super-resolution, where replacing
a per-pixel loss with a perceptual loss gives visually pleasing results.
|
2016-03-27T01:04:27Z
| null | null | null | null | null | null | null | null | null | null |
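A hedged sketch of a feature-reconstruction ("perceptual") loss in the spirit of the abstract above, assuming torchvision is available and pretrained VGG-16 weights can be downloaded; the layer cut-off and the missing input normalization are simplifications, not the paper's exact recipe.

```python
import torch
from torch import nn
from torchvision.models import vgg16

# Fixed, pretrained feature extractor (up to relu3_3); the transformation network would be
# trained against distances in this feature space rather than per-pixel differences.
features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in features.parameters():
    p.requires_grad_(False)

def perceptual_loss(output, target):
    """Mean squared error between high-level feature activations of output and target."""
    return nn.functional.mse_loss(features(output), features(target))

out = torch.rand(1, 3, 128, 128)   # e.g. the transformation network's output
tgt = torch.rand(1, 3, 128, 128)   # e.g. the content target
print(perceptual_loss(out, tgt).item())
```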
1,603.08983
|
Adaptive Computation Time for Recurrent Neural Networks
|
['Alex Graves']
|
['cs.NE']
|
This paper introduces Adaptive Computation Time (ACT), an algorithm that
allows recurrent neural networks to learn how many computational steps to take
between receiving an input and emitting an output. ACT requires minimal changes
to the network architecture, is deterministic and differentiable, and does not
add any noise to the parameter gradients. Experimental results are provided for
four synthetic problems: determining the parity of binary vectors, applying
binary logic operations, adding integers, and sorting real numbers. Overall,
performance is dramatically improved by the use of ACT, which successfully
adapts the number of computational steps to the requirements of the problem. We
also present character-level language modelling results on the Hutter prize
Wikipedia dataset. In this case ACT does not yield large gains in performance;
however it does provide intriguing insight into the structure of the data, with
more computation allocated to harder-to-predict transitions, such as spaces
between words and ends of sentences. This suggests that ACT or other adaptive
computation methods could provide a generic method for inferring segment
boundaries in sequence data.
|
2016-03-29T22:09:00Z
| null | null | null |
Adaptive Computation Time for Recurrent Neural Networks
|
['Alex Graves']
| 2,016
|
arXiv.org
| 552
| 38
|
['Computer Science']
|
1,604.06174
|
Training Deep Nets with Sublinear Memory Cost
|
['Tianqi Chen', 'Bing Xu', 'Chiyuan Zhang', 'Carlos Guestrin']
|
['cs.LG']
|
We propose a systematic approach to reduce the memory consumption of deep
neural network training. Specifically, we design an algorithm that costs
O(sqrt(n)) memory to train a n layer network, with only the computational cost
of an extra forward pass per mini-batch. As many of the state-of-the-art models
hit the upper bound of the GPU memory, our algorithm allows deeper and more
complex models to be explored, and helps advance the innovations in deep
learning research. We focus on reducing the memory cost to store the
intermediate feature maps and gradients during training. Computation graph
analysis is used for automatic in-place operation and memory sharing
optimizations. We show that it is possible to trade computation for memory -
giving a more memory efficient training algorithm with a little extra
computation cost. In the extreme case, our analysis also shows that the memory
consumption can be reduced to O(log n) with as little as O(n log n) extra cost
for forward computation. Our experiments show that we can reduce the memory
cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent
additional running time cost on ImageNet problems. Similarly, significant
memory cost reduction is observed in training complex recurrent neural networks
on very long sequences.
|
2016-04-21T04:15:27Z
| null | null | null | null | null | null | null | null | null | null |
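The compute-for-memory trade described above can be tried with PyTorch's activation checkpointing; a minimal sketch, assuming torch is installed (this uses the generic torch.utils.checkpoint utility, not the paper's MXNet implementation).

```python
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint_sequential

# A long chain of layers; normally every intermediate activation would be stored for backward.
blocks = nn.Sequential(*[nn.Sequential(nn.Linear(256, 256), nn.ReLU()) for _ in range(64)])
x = torch.randn(32, 256, requires_grad=True)

# Split the chain into ~sqrt(n) segments: only segment inputs are kept, and activations
# inside each segment are recomputed during the backward pass (one extra forward).
out = checkpoint_sequential(blocks, 8, x)
out.sum().backward()
print(x.grad.shape)
```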
1,605.0317
|
DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation
Model
|
['Eldar Insafutdinov', 'Leonid Pishchulin', 'Bjoern Andres', 'Mykhaylo Andriluka', 'Bernt Schiele']
|
['cs.CV']
|
The goal of this paper is to advance the state-of-the-art of articulated pose
estimation in scenes with multiple people. To that end we contribute on three
fronts. We propose (1) improved body part detectors that generate effective
bottom-up proposals for body parts; (2) novel image-conditioned pairwise terms
that allow assembling the proposals into a variable number of consistent body
part configurations; and (3) an incremental optimization strategy that explores
the search space more efficiently thus leading both to better performance and
significant speed-up factors. Evaluation is done on two single-person and two
multi-person pose estimation benchmarks. The proposed approach significantly
outperforms best known multi-person pose estimation results while demonstrating
competitive performance on the task of single person pose estimation. Models
and code available at http://pose.mpi-inf.mpg.de
|
2016-05-10T19:49:40Z
|
ECCV'16. High-res version at
https://www.d2.mpi-inf.mpg.de/sites/default/files/insafutdinov16arxiv.pdf
| null | null | null | null | null | null | null | null | null |
1,605.07146
|
Wide Residual Networks
|
['Sergey Zagoruyko', 'Nikos Komodakis']
|
['cs.CV', 'cs.LG', 'cs.NE']
|
Deep residual networks were shown to be able to scale up to thousands of
layers and still have improving performance. However, each fraction of a
percent of improved accuracy costs nearly doubling the number of layers, and so
training very deep residual networks has a problem of diminishing feature
reuse, which makes these networks very slow to train. To tackle these problems,
in this paper we conduct a detailed experimental study on the architecture of
ResNet blocks, based on which we propose a novel architecture where we decrease
depth and increase width of residual networks. We call the resulting network
structures wide residual networks (WRNs) and show that these are far superior
over their commonly used thin and very deep counterparts. For example, we
demonstrate that even a simple 16-layer-deep wide residual network outperforms
in accuracy and efficiency all previous deep residual networks, including
thousand-layer-deep networks, achieving new state-of-the-art results on CIFAR,
SVHN, COCO, and significant improvements on ImageNet. Our code and models are
available at https://github.com/szagoruyko/wide-residual-networks
|
2016-05-23T19:27:13Z
| null | null | null |
Wide Residual Networks
|
['Sergey Zagoruyko', 'N. Komodakis']
| 2,016
|
British Machine Vision Conference
| 8,017
| 32
|
['Computer Science']
|
1,606.00652
|
Death and Suicide in Universal Artificial Intelligence
|
['Jarryd Martin', 'Tom Everitt', 'Marcus Hutter']
|
['cs.AI', 'I.2.0; I.2.6']
|
Reinforcement learning (RL) is a general paradigm for studying intelligent
behaviour, with applications ranging from artificial intelligence to psychology
and economics. AIXI is a universal solution to the RL problem; it can learn any
computable environment. A technical subtlety of AIXI is that it is defined
using a mixture over semimeasures that need not sum to 1, rather than over
proper probability measures. In this work we argue that the shortfall of a
semimeasure can naturally be interpreted as the agent's estimate of the
probability of its death. We formally define death for generally intelligent
agents like AIXI, and prove a number of related theorems about their behaviour.
Notable discoveries include that agent behaviour can change radically under
positive linear transformations of the reward signal (from suicidal to
dogmatically self-preserving), and that the agent's posterior belief that it
will survive increases over time.
|
2016-06-02T12:48:39Z
|
Conference: Artificial General Intelligence (AGI) 2016. 13 pages, 2
figures
| null | null |
Death and Suicide in Universal Artificial Intelligence
|
['Jarryd Martin', 'Tom Everitt', 'Marcus Hutter']
| 2,016
|
Artificial General Intelligence
| 21
| 10
|
['Computer Science']
|
1,606.00915
|
DeepLab: Semantic Image Segmentation with Deep Convolutional Nets,
Atrous Convolution, and Fully Connected CRFs
|
['Liang-Chieh Chen', 'George Papandreou', 'Iasonas Kokkinos', 'Kevin Murphy', 'Alan L. Yuille']
|
['cs.CV']
|
In this work we address the task of semantic image segmentation with Deep
Learning and make three main contributions that are experimentally shown to
have substantial practical merit. First, we highlight convolution with
upsampled filters, or 'atrous convolution', as a powerful tool in dense
prediction tasks. Atrous convolution allows us to explicitly control the
resolution at which feature responses are computed within Deep Convolutional
Neural Networks. It also allows us to effectively enlarge the field of view of
filters to incorporate larger context without increasing the number of
parameters or the amount of computation. Second, we propose atrous spatial
pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP
probes an incoming convolutional feature layer with filters at multiple
sampling rates and effective fields-of-views, thus capturing objects as well as
image context at multiple scales. Third, we improve the localization of object
boundaries by combining methods from DCNNs and probabilistic graphical models.
The commonly deployed combination of max-pooling and downsampling in DCNNs
achieves invariance but has a toll on localization accuracy. We overcome this
by combining the responses at the final DCNN layer with a fully connected
Conditional Random Field (CRF), which is shown both qualitatively and
quantitatively to improve localization performance. Our proposed "DeepLab"
system sets the new state-of-art at the PASCAL VOC-2012 semantic image
segmentation task, reaching 79.7% mIOU in the test set, and advances the
results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and
Cityscapes. All of our code is made publicly available online.
|
2016-06-02T21:52:21Z
|
Accepted by TPAMI
| null | null | null | null | null | null | null | null | null |
1,606.02147
|
ENet: A Deep Neural Network Architecture for Real-Time Semantic
Segmentation
|
['Adam Paszke', 'Abhishek Chaurasia', 'Sangpil Kim', 'Eugenio Culurciello']
|
['cs.CV']
|
The ability to perform pixel-wise semantic segmentation in real-time is of
paramount importance in mobile applications. Recent deep neural networks aimed
at this task have the disadvantage of requiring a large number of floating
point operations and have long run-times that hinder their usability. In this
paper, we propose a novel deep neural network architecture named ENet
(efficient neural network), created specifically for tasks requiring low
latency operation. ENet is up to 18$\times$ faster, requires 75$\times$ fewer
FLOPs, has 79$\times$ fewer parameters, and provides similar or better accuracy
than existing models. We have tested it on CamVid, Cityscapes and SUN datasets
and report on comparisons with existing state-of-the-art methods, and the
trade-offs between accuracy and processing time of a network. We present
performance measurements of the proposed architecture on embedded systems and
suggest possible software improvements that could make ENet even faster.
|
2016-06-07T14:09:27Z
| null | null | null | null | null | null | null | null | null | null |
1,606.03498
|
Improved Techniques for Training GANs
|
['Tim Salimans', 'Ian Goodfellow', 'Wojciech Zaremba', 'Vicki Cheung', 'Alec Radford', 'Xi Chen']
|
['cs.LG', 'cs.CV', 'cs.NE']
|
We present a variety of new architectural features and training procedures
that we apply to the generative adversarial networks (GANs) framework. We focus
on two applications of GANs: semi-supervised learning, and the generation of
images that humans find visually realistic. Unlike most work on generative
models, our primary goal is not to train a model that assigns high likelihood
to test data, nor do we require the model to be able to learn well without
using any labels. Using our new techniques, we achieve state-of-the-art results
in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated
images are of high quality as confirmed by a visual Turing test: our model
generates MNIST samples that humans cannot distinguish from real data, and
CIFAR-10 samples that yield a human error rate of 21.3%. We also present
ImageNet samples with unprecedented resolution and show that our methods enable
the model to learn recognizable features of ImageNet classes.
|
2016-06-10T22:53:35Z
| null | null | null | null | null | null | null | null | null | null |
1,606.04853
|
The ND-IRIS-0405 Iris Image Dataset
|
['Kevin W. Bowyer', 'Patrick J. Flynn']
|
['cs.CV']
|
The Computer Vision Research Lab at the University of Notre Dame began
collecting iris images in the spring semester of 2004. The initial data
collections used an LG 2200 iris imaging system for image acquisition. Image
datasets acquired in 2004-2005 at Notre Dame with this LG 2200 have been used
in the ICE 2005 and ICE 2006 iris biometric evaluations. The ICE 2005 iris
image dataset has been distributed to over 100 research groups around the
world. The purpose of this document is to describe the content of the
ND-IRIS-0405 iris image dataset. This dataset is a superset of the iris image
datasets used in ICE 2005 and ICE 2006. The ND 2004-2005 iris image dataset
contains 64,980 images corresponding to 356 unique subjects, and 712 unique
irises. The age range of the subjects is 18 to 75 years old. 158 of the
subjects are female, and 198 are male. 250 of the subjects are Caucasian, 82
are Asian, and 24 are other ethnicities.
|
2016-06-15T16:40:51Z
|
13 pages, 8 figures
| null | null | null | null | null | null | null | null | null |
1,606.0525
|
SQuAD: 100,000+ Questions for Machine Comprehension of Text
|
['Pranav Rajpurkar', 'Jian Zhang', 'Konstantin Lopyrev', 'Percy Liang']
|
['cs.CL']
|
We present the Stanford Question Answering Dataset (SQuAD), a new reading
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com
|
2016-06-16T16:36:00Z
|
To appear in Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing (EMNLP)
| null | null | null | null | null | null | null | null | null |
1,606.0665
|
3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation
|
['Özgün Çiçek', 'Ahmed Abdulkadir', 'Soeren S. Lienkamp', 'Thomas Brox', 'Olaf Ronneberger']
|
['cs.CV']
|
This paper introduces a network for volumetric segmentation that learns from
sparsely annotated volumetric images. We outline two attractive use cases of
this method: (1) In a semi-automated setup, the user annotates some slices in
the volume to be segmented. The network learns from these sparse annotations
and provides a dense 3D segmentation. (2) In a fully-automated setup, we assume
that a representative, sparsely annotated training set exists. Trained on this
data set, the network densely segments new volumetric images. The proposed
network extends the previous u-net architecture from Ronneberger et al. by
replacing all 2D operations with their 3D counterparts. The implementation
performs on-the-fly elastic deformations for efficient data augmentation during
training. It is trained end-to-end from scratch, i.e., no pre-trained network
is required. We test the performance of the proposed method on a complex,
highly variable 3D structure, the Xenopus kidney, and achieve good results for
both use cases.
|
2016-06-21T16:42:20Z
|
Conditionally accepted for MICCAI 2016
| null | null | null | null | null | null | null | null | null |
1,607.00653
|
node2vec: Scalable Feature Learning for Networks
|
['Aditya Grover', 'Jure Leskovec']
|
['cs.SI', 'cs.LG', 'stat.ML']
|
Prediction tasks over nodes and edges in networks require careful effort in
engineering features used by learning algorithms. Recent research in the
broader field of representation learning has led to significant progress in
automating prediction by learning the features themselves. However, present
feature learning approaches are not expressive enough to capture the diversity
of connectivity patterns observed in networks. Here we propose node2vec, an
algorithmic framework for learning continuous feature representations for nodes
in networks. In node2vec, we learn a mapping of nodes to a low-dimensional
space of features that maximizes the likelihood of preserving network
neighborhoods of nodes. We define a flexible notion of a node's network
neighborhood and design a biased random walk procedure, which efficiently
explores diverse neighborhoods. Our algorithm generalizes prior work which is
based on rigid notions of network neighborhoods, and we argue that the added
flexibility in exploring neighborhoods is the key to learning richer
representations. We demonstrate the efficacy of node2vec over existing
state-of-the-art techniques on multi-label classification and link prediction
in several real-world networks from diverse domains. Taken together, our work
represents a new way for efficiently learning state-of-the-art task-independent
representations in complex networks.
|
2016-07-03T16:09:30Z
|
In Proceedings of the 22nd ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining, 2016
| null | null |
node2vec: Scalable Feature Learning for Networks
|
['Aditya Grover', 'J. Leskovec']
| 2,016
|
Knowledge Discovery and Data Mining
| 10,974
| 47
|
['Computer Science', 'Mathematics', 'Medicine']
|
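A minimal sketch of the biased second-order random walk at the heart of node2vec, assuming an adjacency-set graph representation; the real implementation uses alias sampling for efficiency, and the graph and parameter values here are illustrative.

```python
import random
from collections import defaultdict

def node2vec_walk(adj, start, length, p=1.0, q=1.0):
    """One biased second-order random walk (unnormalized weights); p and q trade off BFS- vs DFS-like exploration."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = list(adj[cur])
        if not nbrs:
            break
        if len(walk) == 1:
            walk.append(random.choice(nbrs))
            continue
        prev = walk[-2]
        weights = []
        for x in nbrs:
            if x == prev:                 # step back to the previous node
                weights.append(1.0 / p)
            elif x in adj[prev]:          # node at distance 1 from prev (BFS-like)
                weights.append(1.0)
            else:                         # node at distance 2 from prev (DFS-like)
                weights.append(1.0 / q)
        walk.append(random.choices(nbrs, weights=weights, k=1)[0])
    return walk

# toy graph: a 0-1-2-3 path plus a chord 1-3
adj = defaultdict(set)
for a, b in [(0, 1), (1, 2), (2, 3), (1, 3)]:
    adj[a].add(b); adj[b].add(a)
print(node2vec_walk(adj, start=0, length=6, p=0.5, q=2.0))
```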
1,607.01759
|
Bag of Tricks for Efficient Text Classification
|
['Armand Joulin', 'Edouard Grave', 'Piotr Bojanowski', 'Tomas Mikolov']
|
['cs.CL']
|
This paper explores a simple and efficient baseline for text classification.
Our experiments show that our fast text classifier fastText is often on par
with deep learning classifiers in terms of accuracy, and many orders of
magnitude faster for training and evaluation. We can train fastText on more
than one billion words in less than ten minutes using a standard multicore CPU,
and classify half a million sentences among~312K classes in less than a minute.
|
2016-07-06T19:40:15Z
| null | null | null | null | null | null | null | null | null | null |
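A toy sketch of the fastText classifier idea from the abstract above: average the embeddings of a sentence's tokens and feed the average to a linear softmax classifier. The bag-of-n-gram features, vocabulary hashing, and hierarchical softmax used for very large label sets are omitted; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {w: i for i, w in enumerate("the cat sat on mat dog ran".split())}
E = rng.normal(scale=0.1, size=(len(vocab), 16))   # word embeddings
W = rng.normal(scale=0.1, size=(16, 2))            # linear classifier for 2 classes

def predict(sentence):
    ids = [vocab[w] for w in sentence.split() if w in vocab]
    h = E[ids].mean(axis=0)                         # bag-of-words average
    logits = h @ W
    return np.exp(logits) / np.exp(logits).sum()    # softmax over classes

print(predict("the cat sat on the mat"))
```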
1,607.04606
|
Enriching Word Vectors with Subword Information
|
['Piotr Bojanowski', 'Edouard Grave', 'Armand Joulin', 'Tomas Mikolov']
|
['cs.CL', 'cs.LG']
|
Continuous word representations, trained on large unlabeled corpora are
useful for many natural language processing tasks. Popular models that learn
such representations ignore the morphology of words, by assigning a distinct
vector to each word. This is a limitation, especially for languages with large
vocabularies and many rare words. In this paper, we propose a new approach
based on the skipgram model, where each word is represented as a bag of
character $n$-grams. A vector representation is associated to each character
$n$-gram; words being represented as the sum of these representations. Our
method is fast, allowing models to be trained on large corpora quickly, and allows us
to compute word representations for words that did not appear in the training
data. We evaluate our word representations on nine different languages, both on
word similarity and analogy tasks. By comparing to recently proposed
morphological word representations, we show that our vectors achieve
state-of-the-art performance on these tasks.
|
2016-07-15T18:27:55Z
|
Accepted to TACL. The two first authors contributed equally
| null | null | null | null | null | null | null | null | null |
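A small sketch of the subword representation described above: extract character n-grams with boundary markers and represent a word as the sum of its n-gram vectors. Real fastText hashes n-grams into a fixed number of buckets; the lookup table and n-gram range here are illustrative.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of the word wrapped in boundary markers, plus the whole word itself."""
    w = f"<{word}>"
    grams = {w}
    for n in range(n_min, n_max + 1):
        grams.update(w[i:i + n] for i in range(len(w) - n + 1))
    return grams

dim, table, rng = 16, {}, np.random.default_rng(0)

def ngram_vector(g):
    # illustrative lookup table; fastText would hash g into one of a fixed number of buckets
    if g not in table:
        table[g] = rng.normal(scale=0.01, size=dim)
    return table[g]

word_vector = sum(ngram_vector(g) for g in char_ngrams("where"))
print(sorted(char_ngrams("where", 3, 4)))
print(word_vector.shape)
```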
1,607.0645
|
Layer Normalization
|
['Jimmy Lei Ba', 'Jamie Ryan Kiros', 'Geoffrey E. Hinton']
|
['stat.ML', 'cs.LG']
|
Training state-of-the-art, deep neural networks is computationally expensive.
One way to reduce the training time is to normalize the activities of the
neurons. A recently introduced technique called batch normalization uses the
distribution of the summed input to a neuron over a mini-batch of training
cases to compute a mean and variance which are then used to normalize the
summed input to that neuron on each training case. This significantly reduces
the training time in feed-forward neural networks. However, the effect of batch
normalization is dependent on the mini-batch size and it is not obvious how to
apply it to recurrent neural networks. In this paper, we transpose batch
normalization into layer normalization by computing the mean and variance used
for normalization from all of the summed inputs to the neurons in a layer on a
single training case. Like batch normalization, we also give each neuron its
own adaptive bias and gain which are applied after the normalization but before
the non-linearity. Unlike batch normalization, layer normalization performs
exactly the same computation at training and test times. It is also
straightforward to apply to recurrent neural networks by computing the
normalization statistics separately at each time step. Layer normalization is
very effective at stabilizing the hidden state dynamics in recurrent networks.
Empirically, we show that layer normalization can substantially reduce the
training time compared with previously published techniques.
|
2016-07-21T19:57:52Z
| null | null | null | null | null | null | null | null | null | null |
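A minimal NumPy sketch of the layer normalization computation described above: mean and variance are taken over the hidden units of a single example (not over the batch), followed by a learned per-unit gain and bias.

```python
import numpy as np

def layer_norm(x, gain, bias, eps=1e-5):
    """Normalize each example over its hidden units (last axis), then rescale and shift."""
    mu = x.mean(axis=-1, keepdims=True)       # per-example mean
    sigma = x.std(axis=-1, keepdims=True)     # per-example std
    return gain * (x - mu) / (sigma + eps) + bias

# toy usage: a batch of 4 examples with 8 hidden units each
h = np.random.randn(4, 8)
g, b = np.ones(8), np.zeros(8)
out = layer_norm(h, g, b)
print(out.mean(axis=-1), out.std(axis=-1))    # ~0 and ~1 for every example
```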
1,608.00272
|
Modeling Context in Referring Expressions
|
['Licheng Yu', 'Patrick Poirson', 'Shan Yang', 'Alexander C. Berg', 'Tamara L. Berg']
|
['cs.CV', 'cs.CL']
|
Humans refer to objects in their environments all the time, especially in
dialogue with other people. We explore generating and comprehending natural
language referring expressions for objects in images. In particular, we focus
on incorporating better measures of visual context into referring expression
models and find that visual comparison to other objects within an image helps
improve performance significantly. We also develop methods to tie the language
generation process together, so that we generate expressions for all objects of
a particular category jointly. Evaluation on three recent datasets - RefCOCO,
RefCOCO+, and RefCOCOg, shows the advantages of our methods for both referring
expression generation and comprehension.
|
2016-07-31T22:21:42Z
|
19 pages, 6 figures, in ECCV 2016; authors, references and
acknowledgement updated
| null | null | null | null | null | null | null | null | null |
1,608.06993
|
Densely Connected Convolutional Networks
|
['Gao Huang', 'Zhuang Liu', 'Laurens van der Maaten', 'Kilian Q. Weinberger']
|
['cs.CV', 'cs.LG']
|
Recent work has shown that convolutional networks can be substantially
deeper, more accurate, and efficient to train if they contain shorter
connections between layers close to the input and those close to the output. In
this paper, we embrace this observation and introduce the Dense Convolutional
Network (DenseNet), which connects each layer to every other layer in a
feed-forward fashion. Whereas traditional convolutional networks with L layers
have L connections - one between each layer and its subsequent layer - our
network has L(L+1)/2 direct connections. For each layer, the feature-maps of
all preceding layers are used as inputs, and its own feature-maps are used as
inputs into all subsequent layers. DenseNets have several compelling
advantages: they alleviate the vanishing-gradient problem, strengthen feature
propagation, encourage feature reuse, and substantially reduce the number of
parameters. We evaluate our proposed architecture on four highly competitive
object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet).
DenseNets obtain significant improvements over the state-of-the-art on most of
them, whilst requiring less computation to achieve high performance. Code and
pre-trained models are available at https://github.com/liuzhuang13/DenseNet .
|
2016-08-25T00:44:55Z
|
CVPR 2017
| null | null | null | null | null | null | null | null | null |
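An illustrative PyTorch sketch of a dense block, assuming the usual BN-ReLU-conv ordering: each layer receives the concatenation of the input and all earlier layers' feature maps, giving the L(L+1)/2 direct connections described above. Channel sizes are illustrative.

```python
import torch
from torch import nn

class DenseBlock(nn.Module):
    """Dense block: every layer consumes the concatenation of all preceding feature maps."""
    def __init__(self, in_ch, growth, n_layers):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.BatchNorm2d(in_ch + i * growth),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_ch + i * growth, growth, 3, padding=1, bias=False),
            )
            for i in range(n_layers)
        ])

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

block = DenseBlock(in_ch=16, growth=12, n_layers=4)
y = block(torch.randn(1, 16, 32, 32))
print(y.shape)  # (1, 16 + 4*12, 32, 32)
```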
1,609.04802
|
Photo-Realistic Single Image Super-Resolution Using a Generative
Adversarial Network
|
['Christian Ledig', 'Lucas Theis', 'Ferenc Huszar', 'Jose Caballero', 'Andrew Cunningham', 'Alejandro Acosta', 'Andrew Aitken', 'Alykhan Tejani', 'Johannes Totz', 'Zehan Wang', 'Wenzhe Shi']
|
['cs.CV', 'stat.ML']
|
Despite the breakthroughs in accuracy and speed of single image
super-resolution using faster and deeper convolutional neural networks, one
central problem remains largely unsolved: how do we recover the finer texture
details when we super-resolve at large upscaling factors? The behavior of
optimization-based super-resolution methods is principally driven by the choice
of the objective function. Recent work has largely focused on minimizing the
mean squared reconstruction error. The resulting estimates have high peak
signal-to-noise ratios, but they are often lacking high-frequency details and
are perceptually unsatisfying in the sense that they fail to match the fidelity
expected at the higher resolution. In this paper, we present SRGAN, a
generative adversarial network (GAN) for image super-resolution (SR). To our
knowledge, it is the first framework capable of inferring photo-realistic
natural images for 4x upscaling factors. To achieve this, we propose a
perceptual loss function which consists of an adversarial loss and a content
loss. The adversarial loss pushes our solution to the natural image manifold
using a discriminator network that is trained to differentiate between the
super-resolved images and original photo-realistic images. In addition, we use
a content loss motivated by perceptual similarity instead of similarity in
pixel space. Our deep residual network is able to recover photo-realistic
textures from heavily downsampled images on public benchmarks. An extensive
mean-opinion-score (MOS) test shows hugely significant gains in perceptual
quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of
the original high-resolution images than to those obtained with any
state-of-the-art method.
|
2016-09-15T19:53:07Z
|
19 pages, 15 figures, 2 tables, accepted for oral presentation at
CVPR, main paper + some supplementary material
| null | null | null | null | null | null | null | null | null |
1,609.05158
|
Real-Time Single Image and Video Super-Resolution Using an Efficient
Sub-Pixel Convolutional Neural Network
|
['Wenzhe Shi', 'Jose Caballero', 'Ferenc Huszár', 'Johannes Totz', 'Andrew P. Aitken', 'Rob Bishop', 'Daniel Rueckert', 'Zehan Wang']
|
['cs.CV', 'stat.ML']
|
Recently, several models based on deep neural networks have achieved great
success in terms of both reconstruction accuracy and computational performance
for single image super-resolution. In these methods, the low resolution (LR)
input image is upscaled to the high resolution (HR) space using a single
filter, commonly bicubic interpolation, before reconstruction. This means that
the super-resolution (SR) operation is performed in HR space. We demonstrate
that this is sub-optimal and adds computational complexity. In this paper, we
present the first convolutional neural network (CNN) capable of real-time SR of
1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN
architecture where the feature maps are extracted in the LR space. In addition,
we introduce an efficient sub-pixel convolution layer which learns an array of
upscaling filters to upscale the final LR feature maps into the HR output. By
doing so, we effectively replace the handcrafted bicubic filter in the SR
pipeline with more complex upscaling filters specifically trained for each
feature map, whilst also reducing the computational complexity of the overall
SR operation. We evaluate the proposed approach using images and videos from
publicly available datasets and show that it performs significantly better
(+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster
than previous CNN-based methods.
|
2016-09-16T17:58:14Z
|
CVPR 2016 paper with updated affiliations and supplemental material,
fixed typo in equation 4
| null | null | null | null | null | null | null | null | null |
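A minimal sketch of the efficient sub-pixel upscaling layer: all convolutions operate in the low-resolution space and produce C*r^2 channels, which nn.PixelShuffle rearranges into an r-times larger image. Filter sizes and channel counts are illustrative, not the paper's exact configuration.

```python
import torch
from torch import nn

r, C = 3, 1  # upscaling factor and image channels
upscale = nn.Sequential(
    nn.Conv2d(C, 64, 5, padding=2), nn.ReLU(inplace=True),   # feature extraction in LR space
    nn.Conv2d(64, C * r * r, 3, padding=1),                  # r^2 output channels per image channel
    nn.PixelShuffle(r),                                      # (B, C*r^2, H, W) -> (B, C, H*r, W*r)
)
lr = torch.randn(1, C, 32, 32)
print(upscale(lr).shape)  # torch.Size([1, 1, 96, 96])
```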
1,609.07843
|
Pointer Sentinel Mixture Models
|
['Stephen Merity', 'Caiming Xiong', 'James Bradbury', 'Richard Socher']
|
['cs.CL', 'cs.AI']
|
Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus.
|
2016-09-26T04:06:13Z
| null | null | null | null | null | null | null | null | null | null |
1,609.08144
|
Google's Neural Machine Translation System: Bridging the Gap between
Human and Machine Translation
|
['Yonghui Wu', 'Mike Schuster', 'Zhifeng Chen', 'Quoc V. Le', 'Mohammad Norouzi', 'Wolfgang Macherey', 'Maxim Krikun', 'Yuan Cao', 'Qin Gao', 'Klaus Macherey', 'Jeff Klingner', 'Apurva Shah', 'Melvin Johnson', 'Xiaobing Liu', 'Łukasz Kaiser', 'Stephan Gouws', 'Yoshikiyo Kato', 'Taku Kudo', 'Hideto Kazawa', 'Keith Stevens', 'George Kurian', 'Nishant Patil', 'Wei Wang', 'Cliff Young', 'Jason Smith', 'Jason Riesa', 'Alex Rudnick', 'Oriol Vinyals', 'Greg Corrado', 'Macduff Hughes', 'Jeffrey Dean']
|
['cs.CL', 'cs.AI', 'cs.LG']
|
Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system.
|
2016-09-26T19:59:55Z
| null | null | null | null | null | null | null | null | null | null |
1,610.02357
|
Xception: Deep Learning with Depthwise Separable Convolutions
|
['François Chollet']
|
['cs.CV']
|
We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters.
|
2016-10-07T17:51:51Z
| null | null | null | null | null | null | null | null | null | null |
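A small sketch of the depthwise separable convolution the abstract refers to: a per-channel (depthwise) spatial convolution followed by a 1x1 pointwise convolution that mixes channels. This is only the building block, not the Xception architecture itself.

```python
import torch
from torch import nn

def separable_conv(in_ch, out_ch):
    """Depthwise 3x3 convolution (one filter per channel) followed by a 1x1 pointwise convolution."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False),  # depthwise
        nn.Conv2d(in_ch, out_ch, 1, bias=False),                          # pointwise
    )

y = separable_conv(32, 64)(torch.randn(1, 32, 56, 56))
print(y.shape)  # (1, 64, 56, 56)
```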
1,610.02424
|
Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence
Models
|
['Ashwin K Vijayakumar', 'Michael Cogswell', 'Ramprasath R. Selvaraju', 'Qing Sun', 'Stefan Lee', 'David Crandall', 'Dhruv Batra']
|
['cs.AI', 'cs.CL', 'cs.CV']
|
Neural sequence models are widely used to model time-series data. Equally
ubiquitous is the usage of beam search (BS) as an approximate inference
algorithm to decode output sequences from these models. BS explores the search
space in a greedy left-right fashion retaining only the top-B candidates -
resulting in sequences that differ only slightly from each other. Producing
lists of nearly identical sequences is not only computationally wasteful but
also typically fails to capture the inherent ambiguity of complex AI tasks. To
overcome this problem, we propose Diverse Beam Search (DBS), an alternative to
BS that decodes a list of diverse outputs by optimizing for a
diversity-augmented objective. We observe that our method finds better top-1
solutions by controlling for the exploration and exploitation of the search
space - implying that DBS is a better search algorithm. Moreover, these gains
are achieved with minimal computational or memory overhead as compared to
beam search. To demonstrate the broad applicability of our method, we present
results on image captioning, machine translation and visual question generation
using both standard quantitative metrics and qualitative human studies.
Further, we study the role of diversity for image-grounded language generation
tasks as the complexity of the image changes. We observe that our method
consistently outperforms BS and previously proposed techniques for diverse
decoding from neural sequence models.
|
2016-10-07T20:56:47Z
|
16 pages; accepted at AAAI 2018
| null | null | null | null | null | null | null | null | null |
1,611.01734
|
Deep Biaffine Attention for Neural Dependency Parsing
|
['Timothy Dozat', 'Christopher D. Manning']
|
['cs.CL', 'cs.NE']
|
This paper builds off recent work from Kiperwasser & Goldberg (2016) using
neural attention in a simple graph-based dependency parser. We use a larger but
more thoroughly regularized parser than other recent BiLSTM-based approaches,
with biaffine classifiers to predict arcs and labels. Our parser gets state of
the art or near state of the art performance on standard treebanks for six
different languages, achieving 95.7% UAS and 94.1% LAS on the most popular
English PTB dataset. This makes it the highest-performing graph-based parser on
this benchmark---outperforming Kiperwasser & Goldberg (2016) by 1.8% and
2.2%---and comparable to the highest performing transition-based parser
(Kuncoro et al., 2016), which achieves 95.8% UAS and 94.6% LAS. We also show
which hyperparameter choices had a significant effect on parsing accuracy,
allowing us to achieve large gains over other graph-based approaches.
|
2016-11-06T07:26:38Z
|
Accepted to ICLR 2017; updated with new results and comparison to
more recent models, including current state-of-the-art
| null | null | null | null | null | null | null | null | null |
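A NumPy sketch of the biaffine arc scorer described above: given MLP-projected dependent and head representations, the score for "word j heads word i" is a bilinear term plus a head-only bias term. Dimensions and the greedy decoding step are illustrative; the full parser also scores labels and uses BiLSTM features.

```python
import numpy as np

def biaffine_arc_scores(H_dep, H_head, U, u):
    """Return an (n, n) matrix where entry [i, j] scores word j as the head of word i."""
    # H_dep, H_head: (n, d) dependent/head representations
    return H_dep @ U @ H_head.T + H_head @ u   # bilinear term + per-head bias (broadcast over rows)

n, d = 5, 8
H_dep, H_head = np.random.randn(n, d), np.random.randn(n, d)
U, u = np.random.randn(d, d), np.random.randn(d)
scores = biaffine_arc_scores(H_dep, H_head, U, u)
pred_heads = scores.argmax(axis=1)             # greedy head choice per word
print(scores.shape, pred_heads)
```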
1,611.022
|
Unsupervised Cross-Domain Image Generation
|
['Yaniv Taigman', 'Adam Polyak', 'Lior Wolf']
|
['cs.CV']
|
We study the problem of transferring a sample in one domain to an analog
sample in another domain. Given two related domains, S and T, we would like to
learn a generative function G that maps an input sample from S to the domain T,
such that the output of a given function f, which accepts inputs in either
domains, would remain unchanged. Other than the function f, the training data
is unsupervised and consists of a set of samples from each domain. The Domain
Transfer Network (DTN) we present employs a compound loss function that
includes a multiclass GAN loss, an f-constancy component, and a regularizing
component that encourages G to map samples from T to themselves. We apply our
method to visual domains including digits and face images and demonstrate its
ability to generate convincing novel images of previously unseen entities,
while preserving their identity.
|
2016-11-07T18:14:57Z
| null | null | null |
Unsupervised Cross-Domain Image Generation
|
['Yaniv Taigman', 'Adam Polyak', 'Lior Wolf']
| 2,016
|
International Conference on Learning Representations
| 1,003
| 30
|
['Computer Science']
|
1,611.04033
|
1.5 billion words Arabic Corpus
|
['Ibrahim Abu El-khair']
|
['cs.CL', 'cs.DL', 'cs.IR']
|
This study is an attempt to build a contemporary linguistic corpus for the
Arabic language. The corpus produced is a text corpus that includes more than
five million newspaper articles. It contains over a billion and a half words in
total, out of which there are about three million unique words. The data were collected
from newspaper articles in ten major news sources from eight Arabic countries,
over a period of fourteen years. The corpus was encoded with two types of
encoding, namely: UTF-8, and Windows CP-1256. Also it was marked with two
mark-up languages, namely: SGML, and XML.
|
2016-11-12T18:41:58Z
| null | null | null |
1.5 billion words Arabic Corpus
|
['I. A. El-Khair']
| 2,016
|
arXiv.org
| 99
| 30
|
['Computer Science']
|
1,611.05431
|
Aggregated Residual Transformations for Deep Neural Networks
|
['Saining Xie', 'Ross Girshick', 'Piotr Dollár', 'Zhuowen Tu', 'Kaiming He']
|
['cs.CV']
|
We present a simple, highly modularized network architecture for image
classification. Our network is constructed by repeating a building block that
aggregates a set of transformations with the same topology. Our simple design
results in a homogeneous, multi-branch architecture that has only a few
hyper-parameters to set. This strategy exposes a new dimension, which we call
"cardinality" (the size of the set of transformations), as an essential factor
in addition to the dimensions of depth and width. On the ImageNet-1K dataset,
we empirically show that even under the restricted condition of maintaining
complexity, increasing cardinality is able to improve classification accuracy.
Moreover, increasing cardinality is more effective than going deeper or wider
when we increase the capacity. Our models, named ResNeXt, are the foundations
of our entry to the ILSVRC 2016 classification task in which we secured 2nd
place. We further investigate ResNeXt on an ImageNet-5K set and the COCO
detection set, also showing better results than its ResNet counterpart. The
code and models are publicly available online.
|
2016-11-16T20:34:42Z
|
Accepted to CVPR 2017. Code and models:
https://github.com/facebookresearch/ResNeXt
| null | null | null | null | null | null | null | null | null |
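An illustrative sketch of one aggregated-transformation block: the set of parallel paths with identical topology can be realized as a single grouped convolution, so "cardinality" becomes the number of groups. The 256-channel, 32-group sizing follows the paper's 32x4d template but is otherwise just an example, not the released model code.

```python
import torch
from torch import nn

class ResNeXtBlock(nn.Module):
    """Bottleneck block whose 3x3 convolution is grouped: each group is one parallel transformation."""
    def __init__(self, ch=256, bottleneck=128, cardinality=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, bottleneck, 1, bias=False), nn.BatchNorm2d(bottleneck), nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, bottleneck, 3, padding=1, groups=cardinality, bias=False),
            nn.BatchNorm2d(bottleneck), nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, ch, 1, bias=False), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))   # identity shortcut around the aggregated paths

y = ResNeXtBlock()(torch.randn(1, 256, 14, 14))
print(y.shape)  # (1, 256, 14, 14)
```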
1,611.06455
|
Time Series Classification from Scratch with Deep Neural Networks: A
Strong Baseline
|
['Zhiguang Wang', 'Weizhong Yan', 'Tim Oates']
|
['cs.LG', 'cs.NE', 'stat.ML']
|
We propose a simple but strong baseline for time series classification from
scratch with deep neural networks. Our proposed baseline models are pure
end-to-end without any heavy preprocessing on the raw data or feature crafting.
The proposed Fully Convolutional Network (FCN) achieves premium performance to
other state-of-the-art approaches and our exploration of the very deep neural
networks with the ResNet structure is also competitive. The global average
pooling in our convolutional model enables the exploitation of the Class
Activation Map (CAM) to find out the contributing region in the raw data for
the specific labels. Our models provide a simple choice for real-world
applications and a good starting point for future research. An overall
analysis is provided to discuss the generalization capability of our models,
learned features, network structures and the classification semantics.
|
2016-11-20T00:34:09Z
| null | null | null | null | null | null | null | null | null | null |
1,611.07004
|
Image-to-Image Translation with Conditional Adversarial Networks
|
['Phillip Isola', 'Jun-Yan Zhu', 'Tinghui Zhou', 'Alexei A. Efros']
|
['cs.CV']
|
We investigate conditional adversarial networks as a general-purpose solution
to image-to-image translation problems. These networks not only learn the
mapping from input image to output image, but also learn a loss function to
train this mapping. This makes it possible to apply the same generic approach
to problems that traditionally would require very different loss formulations.
We demonstrate that this approach is effective at synthesizing photos from
label maps, reconstructing objects from edge maps, and colorizing images, among
other tasks. Indeed, since the release of the pix2pix software associated with
this paper, a large number of internet users (many of them artists) have posted
their own experiments with our system, further demonstrating its wide
applicability and ease of adoption without the need for parameter tweaking. As
a community, we no longer hand-engineer our mapping functions, and this work
suggests we can achieve reasonable results without hand-engineering our loss
functions either.
|
2016-11-21T20:48:16Z
|
Website: https://phillipi.github.io/pix2pix/, CVPR 2017
| null | null |
Image-to-Image Translation with Conditional Adversarial Networks
|
['Phillip Isola', 'Jun-Yan Zhu', 'Tinghui Zhou', 'Alexei A. Efros']
| 2,016
|
Computer Vision and Pattern Recognition
| 19,761
| 70
|
['Computer Science']
|
1,611.07308
|
Variational Graph Auto-Encoders
|
['Thomas N. Kipf', 'Max Welling']
|
['stat.ML', 'cs.LG']
|
We introduce the variational graph auto-encoder (VGAE), a framework for
unsupervised learning on graph-structured data based on the variational
auto-encoder (VAE). This model makes use of latent variables and is capable of
learning interpretable latent representations for undirected graphs. We
demonstrate this model using a graph convolutional network (GCN) encoder and a
simple inner product decoder. Our model achieves competitive results on a link
prediction task in citation networks. In contrast to most existing models for
unsupervised learning on graph-structured data and link prediction, our model
can naturally incorporate node features, which significantly improves
predictive performance on a number of benchmark datasets.
|
2016-11-21T11:37:17Z
|
Bayesian Deep Learning Workshop (NIPS 2016)
| null | null | null | null | null | null | null | null | null |
1,611.0805
|
Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields
|
['Zhe Cao', 'Tomas Simon', 'Shih-En Wei', 'Yaser Sheikh']
|
['cs.CV']
|
We present an approach to efficiently detect the 2D pose of multiple people
in an image. The approach uses a nonparametric representation, which we refer
to as Part Affinity Fields (PAFs), to learn to associate body parts with
individuals in the image. The architecture encodes global context, allowing a
greedy bottom-up parsing step that maintains high accuracy while achieving
realtime performance, irrespective of the number of people in the image. The
architecture is designed to jointly learn part locations and their association
via two branches of the same sequential prediction process. Our method placed
first in the inaugural COCO 2016 keypoints challenge, and significantly exceeds
the previous state-of-the-art result on the MPII Multi-Person benchmark, both
in performance and efficiency.
|
2016-11-24T01:58:16Z
|
Accepted as CVPR 2017 Oral. Video result:
https://youtu.be/pW6nZXeWlGM
| null | null |
Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields
|
['Zhe Cao', 'T. Simon', 'S. Wei', 'Yaser Sheikh']
| 2,016
|
Computer Vision and Pattern Recognition
| 6,570
| 43
|
['Computer Science']
|
1,611.09268
|
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
|
['Payal Bajaj', 'Daniel Campos', 'Nick Craswell', 'Li Deng', 'Jianfeng Gao', 'Xiaodong Liu', 'Rangan Majumder', 'Andrew McNamara', 'Bhaskar Mitra', 'Tri Nguyen', 'Mir Rosenberg', 'Xia Song', 'Alina Stoica', 'Saurabh Tiwary', 'Tong Wang']
|
['cs.CL', 'cs.IR']
|
We introduce a large scale MAchine Reading COmprehension dataset, which we
name MS MARCO. The dataset comprises 1,010,916 anonymized
questions---sampled from Bing's search query logs---each with a human generated
answer and 182,669 completely human rewritten generated answers. In addition,
the dataset contains 8,841,823 passages---extracted from 3,563,535 web
documents retrieved by Bing---that provide the information necessary for
curating the natural language answers. A question in the MS MARCO dataset may
have multiple answers or no answers at all. Using this dataset, we propose
three different tasks with varying levels of difficulty: (i) predict if a
question is answerable given a set of context passages, and extract and
synthesize the answer as a human would (ii) generate a well-formed answer (if
possible) based on the context passages that can be understood with the
question and passage context, and finally (iii) rank a set of retrieved
passages given a question. The size of the dataset and the fact that the
questions are derived from real user search queries distinguishes MS MARCO from
other well-known publicly available datasets for machine reading comprehension
and question-answering. We believe that the scale and the real-world nature of
this dataset makes it attractive for benchmarking machine reading comprehension
and question-answering models.
|
2016-11-28T18:14:11Z
| null | null | null | null | null | null | null | null | null | null |
1,611.10012
|
Speed/accuracy trade-offs for modern convolutional object detectors
|
['Jonathan Huang', 'Vivek Rathod', 'Chen Sun', 'Menglong Zhu', 'Anoop Korattikara', 'Alireza Fathi', 'Ian Fischer', 'Zbigniew Wojna', 'Yang Song', 'Sergio Guadarrama', 'Kevin Murphy']
|
['cs.CV']
|
The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task.
|
2016-11-30T06:06:15Z
|
Accepted to CVPR 2017
| null | null | null | null | null | null | null | null | null |
1,612.00496
|
3D Bounding Box Estimation Using Deep Learning and Geometry
|
['Arsalan Mousavian', 'Dragomir Anguelov', 'John Flynn', 'Jana Kosecka']
|
['cs.CV']
|
We present a method for 3D object detection and pose estimation from a single
image. In contrast to current techniques that only regress the 3D orientation
of an object, our method first regresses relatively stable 3D object properties
using a deep convolutional neural network and then combines these estimates
with geometric constraints provided by a 2D object bounding box to produce a
complete 3D bounding box. The first network output estimates the 3D object
orientation using a novel hybrid discrete-continuous loss, which significantly
outperforms the L2 loss. The second output regresses the 3D object dimensions,
which have relatively little variance compared to alternatives and can often be
predicted for many object types. These estimates, combined with the geometric
constraints on translation imposed by the 2D bounding box, enable us to recover
a stable and accurate 3D object pose. We evaluate our method on the challenging
KITTI object detection benchmark both on the official metric of 3D orientation
estimation and also on the accuracy of the obtained 3D bounding boxes. Although
conceptually simple, our method outperforms more complex and computationally
expensive approaches that leverage semantic segmentation, instance level
segmentation and flat ground priors and sub-category detection. Our
discrete-continuous loss also produces state of the art results for 3D
viewpoint estimation on the Pascal 3D+ dataset.
|
2016-12-01T22:16:48Z
|
To appear in IEEE Conference on Computer Vision and Pattern
Recognition (CVPR) 2017
| null | null | null | null | null | null | null | null | null |
1,612.00593
|
PointNet: Deep Learning on Point Sets for 3D Classification and
Segmentation
|
['Charles R. Qi', 'Hao Su', 'Kaichun Mo', 'Leonidas J. Guibas']
|
['cs.CV']
|
Point cloud is an important type of geometric data structure. Due to its
irregular format, most researchers transform such data to regular 3D voxel
grids or collections of images. This, however, renders data unnecessarily
voluminous and causes issues. In this paper, we design a novel type of neural
network that directly consumes point clouds and well respects the permutation
invariance of points in the input. Our network, named PointNet, provides a
unified architecture for applications ranging from object classification, part
segmentation, to scene semantic parsing. Though simple, PointNet is highly
efficient and effective. Empirically, it shows strong performance on par with
or even better than the state of the art. Theoretically, we provide analysis towards
understanding of what the network has learnt and why the network is robust with
respect to input perturbation and corruption.
|
2016-12-02T08:40:40Z
|
CVPR 2017
| null | null | null | null | null | null | null | null | null |
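A toy sketch of the PointNet idea from the abstract above: a weight-shared per-point MLP followed by a symmetric max pooling over the unordered point set, so the output is invariant to point permutations. The input/feature transform networks and the segmentation branch are omitted; sizes are illustrative.

```python
import torch
from torch import nn

class TinyPointNet(nn.Module):
    """Shared per-point MLP + permutation-invariant max pooling + a small classification head."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 256), nn.ReLU())
        self.head = nn.Linear(256, n_classes)

    def forward(self, pts):                         # pts: (B, N, 3) unordered points
        per_point = self.point_mlp(pts)             # same weights applied to every point
        global_feat = per_point.max(dim=1).values   # symmetric aggregation over the point set
        return self.head(global_feat)

net = TinyPointNet()
x = torch.randn(2, 1024, 3)
print(net(x).shape)                                 # (2, 10)
perm = torch.randperm(1024)
print(torch.allclose(net(x), net(x[:, perm])))      # True: permuting the points changes nothing
```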
1,612.00796
|
Overcoming catastrophic forgetting in neural networks
|
['James Kirkpatrick', 'Razvan Pascanu', 'Neil Rabinowitz', 'Joel Veness', 'Guillaume Desjardins', 'Andrei A. Rusu', 'Kieran Milan', 'John Quan', 'Tiago Ramalho', 'Agnieszka Grabska-Barwinska', 'Demis Hassabis', 'Claudia Clopath', 'Dharshan Kumaran', 'Raia Hadsell']
|
['cs.LG', 'cs.AI', 'stat.ML']
|
The ability to learn tasks in a sequential fashion is crucial to the
development of artificial intelligence. Neural networks are not, in general,
capable of this and it has been widely thought that catastrophic forgetting is
an inevitable feature of connectionist models. We show that it is possible to
overcome this limitation and train networks that can maintain expertise on
tasks which they have not experienced for a long time. Our approach remembers
old tasks by selectively slowing down learning on the weights important for
those tasks. We demonstrate our approach is scalable and effective by solving a
set of classification tasks based on the MNIST hand written digit dataset and
by learning several Atari 2600 games sequentially.
|
2016-12-02T19:18:37Z
| null | null |
10.1073/pnas.1611835114
| null | null | null | null | null | null | null |
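A minimal sketch of the quadratic penalty used to slow down learning on weights deemed important for earlier tasks (an elastic-weight-consolidation style term added to the new task's loss); the Fisher information values and the weighting lambda here are made up for illustration.

```python
import numpy as np

def ewc_penalty(params, old_params, fisher, lam=1.0):
    """Quadratic penalty: weights with high importance (Fisher) are pulled back toward their old values."""
    return 0.5 * lam * sum(
        np.sum(F * (p - p_old) ** 2)
        for p, p_old, F in zip(params, old_params, fisher)
    )

# toy usage with made-up values
theta      = [np.array([0.9, -0.2])]   # current weights while training the new task
theta_star = [np.array([1.0,  0.0])]   # weights after the previous task
fisher     = [np.array([5.0,  0.1])]   # per-weight importance estimates for that task
print(ewc_penalty(theta, theta_star, fisher, lam=10.0))
```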
1,612.0184
|
FMA: A Dataset For Music Analysis
|
['Michaël Defferrard', 'Kirell Benzi', 'Pierre Vandergheynst', 'Xavier Bresson']
|
['cs.SD', 'cs.IR']
|
We introduce the Free Music Archive (FMA), an open and easily accessible
dataset suitable for evaluating several tasks in MIR, a field concerned with
browsing, searching, and organizing large music collections. The community's
growing interest in feature and end-to-end learning is however restrained by
the limited availability of large audio datasets. The FMA aims to overcome this
hurdle by providing 917 GiB and 343 days of Creative Commons-licensed audio
from 106,574 tracks from 16,341 artists and 14,854 albums, arranged in a
hierarchical taxonomy of 161 genres. It provides full-length and high-quality
audio, pre-computed features, together with track- and user-level metadata,
tags, and free-form text such as biographies. We here describe the dataset and
how it was created, propose a train/validation/test split and three subsets,
discuss some suitable MIR tasks, and evaluate some baselines for genre
recognition. Code, data, and usage examples are available at
https://github.com/mdeff/fma
|
2016-12-06T14:58:59Z
|
ISMIR 2017 camera-ready
| null | null | null | null | null | null | null | null | null |
1,612.03144
|
Feature Pyramid Networks for Object Detection
|
['Tsung-Yi Lin', 'Piotr Dollár', 'Ross Girshick', 'Kaiming He', 'Bharath Hariharan', 'Serge Belongie']
|
['cs.CV']
|
Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available.
|
2016-12-09T19:55:54Z
| null | null | null | null | null | null | null | null | null | null |
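A compact sketch of the FPN top-down pathway described above: 1x1 lateral convolutions project each backbone stage to a common width, coarser maps are upsampled and added in, and a 3x3 convolution smooths each merged map. Channel counts and spatial sizes are illustrative.

```python
import torch
from torch import nn
import torch.nn.functional as F

class TopDownFPN(nn.Module):
    """Top-down pathway with lateral connections over a list of backbone feature maps (fine to coarse)."""
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_ch=256):
        super().__init__()
        self.lateral = nn.ModuleList([nn.Conv2d(c, out_ch, 1) for c in in_channels])
        self.smooth = nn.ModuleList([nn.Conv2d(out_ch, out_ch, 3, padding=1) for _ in in_channels])

    def forward(self, feats):
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        for i in range(len(laterals) - 2, -1, -1):   # merge from coarse to fine
            laterals[i] = laterals[i] + F.interpolate(laterals[i + 1], scale_factor=2, mode="nearest")
        return [s(p) for s, p in zip(self.smooth, laterals)]

feats = [torch.randn(1, c, s, s) for c, s in zip((256, 512, 1024, 2048), (64, 32, 16, 8))]
pyramid = TopDownFPN()(feats)
print([tuple(p.shape) for p in pyramid])
```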
1,612.03651
|
FastText.zip: Compressing text classification models
|
['Armand Joulin', 'Edouard Grave', 'Piotr Bojanowski', 'Matthijs Douze', 'Hérve Jégou', 'Tomas Mikolov']
|
['cs.CL', 'cs.LG']
|
We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy.
|
2016-12-12T12:51:03Z
|
Submitted to ICLR 2017
| null | null |
FastText.zip: Compressing text classification models
|
['Armand Joulin', 'Edouard Grave', 'Piotr Bojanowski', 'Matthijs Douze', 'H. Jégou', 'Tomas Mikolov']
| 2,016
|
arXiv.org
| 1,216
| 45
|
['Computer Science']
|
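A generic sketch of the product-quantization idea the abstract builds on (split each embedding into sub-vectors, run k-means per sub-space, store only the centroid codes); the sub-vector and centroid counts here are arbitrary, and this is not the fastText.zip code.

```python
# Illustrative product quantization: each 64-d vector is stored as 4 one-byte codes.
import numpy as np
from sklearn.cluster import KMeans

def pq_train(X, n_subvectors=4, n_centroids=16):
    d = X.shape[1] // n_subvectors
    codebooks, codes = [], []
    for s in range(n_subvectors):
        sub = X[:, s * d:(s + 1) * d]
        km = KMeans(n_clusters=n_centroids, n_init=4, random_state=0).fit(sub)
        codebooks.append(km.cluster_centers_)
        codes.append(km.labels_.astype(np.uint8))    # one small code per sub-vector
    return codebooks, np.stack(codes, axis=1)

def pq_decode(codebooks, codes):
    return np.hstack([cb[c] for cb, c in zip(codebooks, codes.T)])

X = np.random.randn(1000, 64).astype(np.float32)
codebooks, codes = pq_train(X)
X_hat = pq_decode(codebooks, codes)
print(codes.shape, float(np.mean((X - X_hat) ** 2)))  # stored codes and reconstruction error
```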
1,612.06321
|
Large-Scale Image Retrieval with Attentive Deep Local Features
|
['Hyeonwoo Noh', 'Andre Araujo', 'Jack Sim', 'Tobias Weyand', 'Bohyung Han']
|
['cs.CV']
|
We propose an attentive local feature descriptor suitable for large-scale
image retrieval, referred to as DELF (DEep Local Feature). The new feature is
based on convolutional neural networks, which are trained only with image-level
annotations on a landmark image dataset. To identify semantically useful local
features for image retrieval, we also propose an attention mechanism for
keypoint selection, which shares most network layers with the descriptor. This
framework can be used for image retrieval as a drop-in replacement for other
keypoint detectors and descriptors, enabling more accurate feature matching and
geometric verification. Our system produces reliable confidence scores to
reject false positives---in particular, it is robust against queries that have
no correct match in the database. To evaluate the proposed descriptor, we
introduce a new large-scale dataset, referred to as Google-Landmarks dataset,
which involves challenges in both database and query such as background
clutter, partial occlusion, multiple landmarks, objects in variable scales,
etc. We show that DELF outperforms the state-of-the-art global and local
descriptors in the large-scale setting by significant margins. Code and dataset
can be found at the project webpage:
https://github.com/tensorflow/models/tree/master/research/delf .
|
2016-12-19T19:35:56Z
|
ICCV 2017. Code and dataset available:
https://github.com/tensorflow/models/tree/master/research/delf
| null | null | null | null | null | null | null | null | null |
1,612.07695
|
MultiNet: Real-time Joint Semantic Reasoning for Autonomous Driving
|
['Marvin Teichmann', 'Michael Weber', 'Marius Zoellner', 'Roberto Cipolla', 'Raquel Urtasun']
|
['cs.CV', 'cs.RO']
|
While most approaches to semantic reasoning have focused on improving
performance, in this paper we argue that computational times are very important
in order to enable real time applications such as autonomous driving. Towards
this goal, we present an approach to joint classification, detection and
semantic segmentation via a unified architecture where the encoder is shared
amongst the three tasks. Our approach is very simple, can be trained end-to-end
and performs extremely well in the challenging KITTI dataset, outperforming the
state-of-the-art in the road segmentation task. Our approach is also very
efficient, taking less than 100 ms to perform all tasks.
|
2016-12-22T16:55:02Z
|
9 pages, 7 tables and 9 figures; first place on Kitti Road
Segmentation; Code on GitHub (https://github.com/MarvinTeichmann/MultiNet)
| null | null |
MultiNet: Real-time Joint Semantic Reasoning for Autonomous Driving
|
['Marvin Teichmann', 'Michael Weber', 'Johann Marius Zöllner', 'R. Cipolla', 'R. Urtasun']
| 2,016
|
2018 IEEE Intelligent Vehicles Symposium (IV)
| 702
| 68
|
['Computer Science']
|
1,612.08083
|
Language Modeling with Gated Convolutional Networks
|
['Yann N. Dauphin', 'Angela Fan', 'Michael Auli', 'David Grangier']
|
['cs.CL']
|
The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms that of Oord et al. (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks.
|
2016-12-23T20:32:33Z
| null | null | null | null | null | null | null | null | null | null |
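A minimal sketch of the gating mechanism the abstract refers to, h = (X*W + b) ⊙ sigmoid(X*V + c), implemented with causal 1-D convolutions; channel and kernel sizes are placeholders, not the paper's configuration.

```python
# Illustrative gated convolution: one branch produces features, the other a sigmoid gate.
import torch
import torch.nn as nn

class GatedConv1d(nn.Module):
    def __init__(self, channels=128, kernel_size=4):
        super().__init__()
        self.pad = kernel_size - 1                    # causal padding: position t sees only tokens <= t
        self.conv = nn.Conv1d(channels, channels, kernel_size)
        self.gate = nn.Conv1d(channels, channels, kernel_size)

    def forward(self, x):                             # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))
        return self.conv(x) * torch.sigmoid(self.gate(x))

x = torch.randn(2, 128, 50)
print(GatedConv1d()(x).shape)                         # (2, 128, 50)
```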
1,612.08242
|
YOLO9000: Better, Faster, Stronger
|
['Joseph Redmon', 'Ali Farhadi']
|
['cs.CV']
|
We introduce YOLO9000, a state-of-the-art, real-time object detection system
that can detect over 9000 object categories. First we propose various
improvements to the YOLO detection method, both novel and drawn from prior
work. The improved model, YOLOv2, is state-of-the-art on standard detection
tasks like PASCAL VOC and COCO. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At
40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like
Faster RCNN with ResNet and SSD while still running significantly faster.
Finally we propose a method to jointly train on object detection and
classification. Using this method we train YOLO9000 simultaneously on the COCO
detection dataset and the ImageNet classification dataset. Our joint training
allows YOLO9000 to predict detections for object classes that don't have
labelled detection data. We validate our approach on the ImageNet detection
task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite
only having detection data for 44 of the 200 classes. On the 156 classes not in
COCO, YOLO9000 gets 16.0 mAP. But YOLO can detect more than just 200 classes;
it predicts detections for more than 9000 different object categories. And it
still runs in real-time.
|
2016-12-25T07:21:38Z
| null | null | null |
YOLO9000: Better, Faster, Stronger
|
['Joseph Redmon', 'Ali Farhadi']
| 2,016
|
Computer Vision and Pattern Recognition
| 15,699
| 20
|
['Computer Science']
|
1,701.02718
|
See the Glass Half Full: Reasoning about Liquid Containers, their Volume
and Content
|
['Roozbeh Mottaghi', 'Connor Schenck', 'Dieter Fox', 'Ali Farhadi']
|
['cs.CV']
|
Humans have rich understanding of liquid containers and their contents; for
example, we can effortlessly pour water from a pitcher to a cup. Doing so
requires estimating the volume of the cup, approximating the amount of water in
the pitcher, and predicting the behavior of water when we tilt the pitcher.
Very little attention in computer vision has been paid to liquids and their
containers. In this paper, we study liquid containers and their contents, and
propose methods to estimate the volume of containers, approximate the amount of
liquid in them, and perform comparative volume estimations all from a single
RGB image. Furthermore, we show the results of the proposed model for
predicting the behavior of liquids inside containers when one tilts the
containers. We also introduce a new dataset of Containers Of liQuid contEnt
(COQE) that contains more than 5,000 images of 10,000 liquid containers in
context labelled with volume, amount of content, bounding box annotation, and
corresponding similar 3D CAD models.
|
2017-01-10T18:25:15Z
| null | null | null | null | null | null | null | null | null | null |
1,701.03755
|
What Can I Do Now? Guiding Users in a World of Automated Decisions
|
['Matthias Gallé']
|
['stat.ML']
|
More and more processes governing our lives include an automatic decision
step in which, based on a feature vector derived from an applicant, an
algorithm has the decision power over the final outcome. Here we present a
simple idea which gives some of the power back to the applicant by providing
her with alternatives which would make the decision algorithm decide
differently. It is based on a formalization reminiscent of methods used for
evasion attacks, and consists in enumerating the subspaces where the
classifier decides the desired output. This has been implemented for the
specific case of decision forests (ensemble methods based on decision trees),
mapping the problem to an iterative version of enumerating $k$-cliques.
|
2017-01-13T17:49:47Z
|
presented at BigIA 2016 workshop: http://bigia2016.irisa.fr/
| null | null |
What Can I Do Now? Guiding Users in a World of Automated Decisions
|
['Matthias Gallé']
| 2,017
| null | 0
| 13
|
['Mathematics', 'Computer Science']
|
1,701.06538
|
Outrageously Large Neural Networks: The Sparsely-Gated
Mixture-of-Experts Layer
|
['Noam Shazeer', 'Azalia Mirhoseini', 'Krzysztof Maziarz', 'Andy Davis', 'Quoc Le', 'Geoffrey Hinton', 'Jeff Dean']
|
['cs.LG', 'cs.CL', 'cs.NE', 'stat.ML']
|
The capacity of a neural network to absorb information is limited by its
number of parameters. Conditional computation, where parts of the network are
active on a per-example basis, has been proposed in theory as a way of
dramatically increasing model capacity without a proportional increase in
computation. In practice, however, there are significant algorithmic and
performance challenges. In this work, we address these challenges and finally
realize the promise of conditional computation, achieving greater than 1000x
improvements in model capacity with only minor losses in computational
efficiency on modern GPU clusters. We introduce a Sparsely-Gated
Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward
sub-networks. A trainable gating network determines a sparse combination of
these experts to use for each example. We apply the MoE to the tasks of
language modeling and machine translation, where model capacity is critical for
absorbing the vast quantities of knowledge available in the training corpora.
We present model architectures in which a MoE with up to 137 billion parameters
is applied convolutionally between stacked LSTM layers. On large language
modeling and machine translation benchmarks, these models achieve significantly
better results than state-of-the-art at lower computational cost.
|
2017-01-23T18:10:00Z
| null | null | null | null | null | null | null | null | null | null |
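A small sketch of sparsely gated expert routing as described above: a gating network scores the experts, only the top-k are evaluated per example, and their outputs are combined with renormalised gate weights. Expert sizes and k are placeholders; this is not the paper's implementation.

```python
# Illustrative top-k mixture-of-experts layer for per-example conditional computation.
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts))

    def forward(self, x):                             # x: (batch, d_model)
        scores = self.gate(x)                         # (batch, n_experts)
        topv, topi = scores.topk(self.k, dim=-1)
        weights = torch.softmax(topv, dim=-1)         # sparse combination weights
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topi[:, slot] == e             # examples that routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

print(TopKMoE()(torch.randn(16, 64)).shape)
```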
1,701.07875
|
Wasserstein GAN
|
['Martin Arjovsky', 'Soumith Chintala', 'Léon Bottou']
|
['stat.ML', 'cs.LG']
|
We introduce a new algorithm named WGAN, an alternative to traditional GAN
training. In this new model, we show that we can improve the stability of
learning, get rid of problems like mode collapse, and provide meaningful
learning curves useful for debugging and hyperparameter searches. Furthermore,
we show that the corresponding optimization problem is sound, and provide
extensive theoretical work highlighting the deep connections to other distances
between distributions.
|
2017-01-26T21:10:29Z
| null | null | null |
Wasserstein GAN
|
['Martín Arjovsky', 'Soumith Chintala', 'Léon Bottou']
| 2,017
|
arXiv.org
| 4,837
| 26
|
['Mathematics', 'Computer Science']
|
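A minimal sketch of the critic update implied by the abstract, assuming the commonly cited recipe of an RMSprop optimiser and weight clipping; the network, learning rate, and clip value are placeholders.

```python
# Illustrative WGAN critic step: maximise mean(critic(real)) - mean(critic(fake)),
# then clip weights to keep the critic roughly 1-Lipschitz.
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

def critic_step(real, fake, clip=0.01):
    opt.zero_grad()
    loss = critic(fake).mean() - critic(real).mean()  # negative of the Wasserstein estimate
    loss.backward()
    opt.step()
    for p in critic.parameters():                     # weight clipping
        p.data.clamp_(-clip, clip)
    return -loss.item()                               # estimated Wasserstein distance

real = torch.randn(128, 2) + 2.0
fake = torch.randn(128, 2)
print(critic_step(real, fake))
```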
1,701.08071
|
Emotion Recognition From Speech With Recurrent Neural Networks
|
['Vladimir Chernykh', 'Pavel Prikhodko']
|
['cs.CL']
|
In this paper the task of emotion recognition from speech is considered. The
proposed approach uses a deep recurrent neural network trained on a sequence of
acoustic features calculated over small speech intervals. At the same time, a
probabilistic CTC loss function makes it possible to handle long utterances
containing both emotional and neutral parts. The effectiveness of this approach
is shown in two ways: first, through a comparison with recent advances in the
field, and second, by measuring human performance on the same task. Both
criteria indicate the high quality of the proposed method.
|
2017-01-27T14:50:36Z
| null | null | null |
Emotion Recognition From Speech With Recurrent Neural Networks
|
['V. Chernykh', 'Grigoriy Sterling', 'Pavel Prihodko']
| 2,017
|
arXiv.org
| 117
| 11
|
['Computer Science']
|
1,701.08118
|
Measuring the Reliability of Hate Speech Annotations: The Case of the
European Refugee Crisis
|
['Björn Ross', 'Michael Rist', 'Guillermo Carbonell', 'Benjamin Cabrera', 'Nils Kurowsky', 'Michael Wojatzki']
|
['cs.CL']
|
Some users of social media are spreading racist, sexist, and otherwise
hateful content. For the purpose of training a hate speech detection system,
the reliability of the annotations is crucial, but there is no universally
agreed-upon definition. We collected potentially hateful messages and asked two
groups of internet users to determine whether they were hate speech or not,
whether they should be banned or not and to rate their degree of offensiveness.
One of the groups was shown a definition prior to completing the survey. We
aimed to assess whether hate speech can be annotated reliably, and the extent
to which existing definitions are in accordance with subjective ratings. Our
results indicate that showing users a definition caused them to partially align
their own opinion with the definition but did not improve reliability, which
was very low overall. We conclude that the presence of hate speech should
perhaps not be considered a binary yes-or-no decision, and raters need more
detailed instructions for the annotation.
|
2017-01-27T17:09:07Z
| null |
Proceedings of NLP4CMC III: 3rd Workshop on Natural Language
Processing for Computer-Mediated Communication (Bochum), Bochumer
Linguistische Arbeitsberichte, vol. 17, sep 2016, pp. 6-9
|
10.17185/duepublico/42132
| null | null | null | null | null | null | null |
1,702.00992
|
Automatic Prediction of Discourse Connectives
|
['Eric Malmi', 'Daniele Pighin', 'Sebastian Krause', 'Mikhail Kozhevnikov']
|
['cs.CL']
|
Accurate prediction of suitable discourse connectives (however, furthermore,
etc.) is a key component of any system aimed at building coherent and fluent
discourses from shorter sentences and passages. As an example, a dialog system
might assemble a long and informative answer by sampling passages extracted
from different documents retrieved from the Web. We formulate the task of
discourse connective prediction and release a dataset of 2.9M sentence pairs
separated by discourse connectives for this task. Then, we evaluate the
hardness of the task for human raters, apply a recently proposed decomposable
attention (DA) model to this task and observe that the automatic predictor has
a higher F1 than human raters (32 vs. 30). Nevertheless, under specific
conditions the raters still outperform the DA model, suggesting that there is
headroom for future improvements.
|
2017-02-03T13:06:25Z
|
This is a pre-print of an article appearing at LREC 2018
| null | null | null | null | null | null | null | null | null |
1,702.04066
|
JFLEG: A Fluency Corpus and Benchmark for Grammatical Error Correction
|
['Courtney Napoles', 'Keisuke Sakaguchi', 'Joel Tetreault']
|
['cs.CL']
|
We present a new parallel corpus, JHU FLuency-Extended GUG corpus (JFLEG) for
developing and evaluating grammatical error correction (GEC). Unlike other
corpora, it represents a broad range of language proficiency levels and uses
holistic fluency edits to not only correct grammatical errors but also make the
original text more native sounding. We describe the types of corrections made
and benchmark four leading GEC systems on this corpus, identifying specific
areas in which they do well and how they can improve. JFLEG fulfills the need
for a new gold standard to properly assess the current state of GEC.
|
2017-02-14T03:47:34Z
|
To appear in EACL 2017 (short papers)
| null | null | null | null | null | null | null | null | null |
1,702.05373
|
EMNIST: an extension of MNIST to handwritten letters
|
['Gregory Cohen', 'Saeed Afshar', 'Jonathan Tapson', 'André van Schaik']
|
['cs.CV']
|
The MNIST dataset has become a standard benchmark for learning,
classification and computer vision systems. Contributing to its widespread
adoption are the understandable and intuitive nature of the task, its
relatively small size and storage requirements and the accessibility and
ease-of-use of the database itself. The MNIST database was derived from a
larger dataset known as the NIST Special Database 19 which contains digits,
uppercase and lowercase handwritten letters. This paper introduces a variant of
the full NIST dataset, which we have called Extended MNIST (EMNIST), which
follows the same conversion paradigm used to create the MNIST dataset. The
result is a set of datasets that constitute more challenging classification
tasks involving letters and digits, and that share the same image structure
and parameters as the original MNIST task, allowing for direct compatibility
with all existing classifiers and systems. Benchmark results are presented
along with a validation of the conversion process through the comparison of the
classification results on converted NIST digits and the MNIST digits.
|
2017-02-17T15:06:14Z
|
The dataset is now available for download from
https://www.westernsydney.edu.au/bens/home/reproducible_research/emnist. This
link is also included in the revised article
| null | null | null | null | null | null | null | null | null |
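Because EMNIST keeps MNIST's image structure, existing pipelines can load it directly; one convenient route (an assumption of this note, not something the entry prescribes) is torchvision's EMNIST wrapper.

```python
# Illustrative loading sketch: EMNIST via torchvision, using the 'letters' split.
from torchvision import datasets, transforms

emnist = datasets.EMNIST(root="./data", split="letters", download=True,
                         transform=transforms.ToTensor())
image, label = emnist[0]
print(image.shape, label)      # 1x28x28 images, the same layout as MNIST
```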
1,702.08734
|
Billion-scale similarity search with GPUs
|
['Jeff Johnson', 'Matthijs Douze', 'Hervé Jégou']
|
['cs.CV', 'cs.DB', 'cs.DS', 'cs.IR']
|
Similarity search finds application in specialized database systems handling
complex data such as images or videos, which are typically represented by
high-dimensional features and require specific indexing structures. This paper
tackles the problem of better utilizing GPUs for this task. While GPUs excel at
data-parallel tasks, prior approaches are bottlenecked by algorithms that
expose less parallelism, such as k-min selection, or make poor use of the
memory hierarchy.
We propose a design for k-selection that operates at up to 55% of theoretical
peak performance, enabling a nearest neighbor implementation that is 8.5x
faster than prior GPU state of the art. We apply it in different similarity
search scenarios, by proposing optimized design for brute-force, approximate
and compressed-domain search based on product quantization. In all these
setups, we outperform the state of the art by large margins. Our implementation
enables the construction of a high accuracy k-NN graph on 95 million images
from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion
vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced
our approach for the sake of comparison and reproducibility.
|
2017-02-28T10:42:31Z
| null | null | null | null | null | null | null | null | null | null |
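The approach the authors open-sourced is available as the Faiss library; a brute-force usage sketch follows, with the GPU transfer left as a comment since it needs the GPU build. Dimensions and sizes here are placeholders.

```python
# Illustrative exact (brute-force) nearest-neighbour search with Faiss.
import numpy as np
import faiss

d = 128
xb = np.random.random((10000, d)).astype("float32")   # database vectors
xq = np.random.random((5, d)).astype("float32")       # query vectors

index = faiss.IndexFlatL2(d)                           # exact L2 search
index.add(xb)
distances, ids = index.search(xq, 4)                   # 4 nearest neighbours per query
print(ids)

# With the GPU build installed, the same index can be moved to a GPU:
# res = faiss.StandardGpuResources()
# gpu_index = faiss.index_cpu_to_gpu(res, 0, index)
```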
1,703.01365
|
Axiomatic Attribution for Deep Networks
|
['Mukund Sundararajan', 'Ankur Taly', 'Qiqi Yan']
|
['cs.LG']
|
We study the problem of attributing the prediction of a deep network to its
input features, a problem previously studied by several other works. We
identify two fundamental axioms, Sensitivity and Implementation Invariance,
that attribution methods ought to satisfy. We show that they are not satisfied
by most known attribution methods, which we consider to be a fundamental
weakness of those methods. We use the axioms to guide the design of a new
attribution method called Integrated Gradients. Our method requires no
modification to the original network and is extremely simple to implement; it
just needs a few calls to the standard gradient operator. We apply this method
to a couple of image models, a couple of text models and a chemistry model,
demonstrating its ability to debug networks, to extract rules from a network,
and to enable users to engage with models better.
|
2017-03-04T00:18:49Z
| null | null | null |
Axiomatic Attribution for Deep Networks
|
['Mukund Sundararajan', 'Ankur Taly', 'Qiqi Yan']
| 2,017
|
International Conference on Machine Learning
| 6,065
| 35
|
['Computer Science']
|
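A minimal sketch of the attribution rule described above: average gradients along the straight-line path from a baseline to the input and scale by (input - baseline). The step count, zero baseline, and toy model are assumptions of this illustration.

```python
# Illustrative integrated gradients for a single target output.
import torch

def integrated_gradients(model, x, baseline=None, target=0, steps=50):
    if baseline is None:
        baseline = torch.zeros_like(x)
    grads = []
    for alpha in torch.linspace(1.0 / steps, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        model(point)[..., target].sum().backward()
        grads.append(point.grad.detach())
    avg_grad = torch.stack(grads).mean(dim=0)          # path-averaged gradient
    return (x - baseline) * avg_grad

model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 3))
x = torch.randn(1, 4)
print(integrated_gradients(model, x, target=1))         # one attribution per input feature
```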
1,703.034
|
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
|
['Chelsea Finn', 'Pieter Abbeel', 'Sergey Levine']
|
['cs.LG', 'cs.AI', 'cs.CV', 'cs.NE']
|
We propose an algorithm for meta-learning that is model-agnostic, in the
sense that it is compatible with any model trained with gradient descent and
applicable to a variety of different learning problems, including
classification, regression, and reinforcement learning. The goal of
meta-learning is to train a model on a variety of learning tasks, such that it
can solve new learning tasks using only a small number of training samples. In
our approach, the parameters of the model are explicitly trained such that a
small number of gradient steps with a small amount of training data from a new
task will produce good generalization performance on that task. In effect, our
method trains the model to be easy to fine-tune. We demonstrate that this
approach leads to state-of-the-art performance on two few-shot image
classification benchmarks, produces good results on few-shot regression, and
accelerates fine-tuning for policy gradient reinforcement learning with neural
network policies.
|
2017-03-09T18:58:03Z
|
ICML 2017. Code at https://github.com/cbfinn/maml, Videos of RL
results at https://sites.google.com/view/maml, Blog post at
http://bair.berkeley.edu/blog/2017/07/18/learning-to-learn/
| null | null | null | null | null | null | null | null | null |
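A compact sketch of the two-level update the abstract describes (an inner gradient step on a task, then an outer update of the initialisation against the post-adaptation loss), using a toy sinusoid-regression task; the task, step sizes, and the use of torch.func.functional_call are choices of this illustration, not the released code.

```python
# Illustrative meta-learning loop: differentiable inner step, outer update of the init.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1, 40), nn.ReLU(), nn.Linear(40, 1))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr, loss_fn = 0.01, nn.MSELoss()

def sample_sine_task():
    amp, phase = torch.rand(1) * 4 + 0.1, torch.rand(1) * 3.14
    def draw(n=10):
        x = torch.rand(n, 1) * 10 - 5
        return x, amp * torch.sin(x + phase)
    return draw

for step in range(3):                                    # a few meta-steps
    meta_opt.zero_grad()
    task = sample_sine_task()
    (x_s, y_s), (x_q, y_q) = task(), task()              # support and query sets of one task
    params = dict(model.named_parameters())
    inner_loss = loss_fn(torch.func.functional_call(model, params, x_s), y_s)
    grads = torch.autograd.grad(inner_loss, list(params.values()), create_graph=True)
    adapted = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}
    outer_loss = loss_fn(torch.func.functional_call(model, adapted, x_q), y_q)
    outer_loss.backward()                                 # gradients flow back to the initial params
    meta_opt.step()
    print(float(outer_loss))
```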
1,703.04009
|
Automated Hate Speech Detection and the Problem of Offensive Language
|
['Thomas Davidson', 'Dana Warmsley', 'Michael Macy', 'Ingmar Weber']
|
['cs.CL']
|
A key challenge for automatic hate-speech detection on social media is the
separation of hate speech from other instances of offensive language. Lexical
detection methods tend to have low precision because they classify all messages
containing particular terms as hate speech and previous work using supervised
learning has failed to distinguish between the two categories. We used a
crowd-sourced hate speech lexicon to collect tweets containing hate speech
keywords. We use crowd-sourcing to label a sample of these tweets into three
categories: those containing hate speech, only offensive language, and those
with neither. We train a multi-class classifier to distinguish between these
different categories. Close analysis of the predictions and the errors shows
when we can reliably separate hate speech from other offensive language and
when this differentiation is more difficult. We find that racist and homophobic
tweets are more likely to be classified as hate speech but that sexist tweets
are generally classified as offensive. Tweets without explicit hate keywords
are also more difficult to classify.
|
2017-03-11T18:20:13Z
|
To appear in the Proceedings of ICWSM 2017. Please cite that version
| null | null | null | null | null | null | null | null | null |
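As a point of reference for the three-way labelling scheme described above, a generic multi-class text baseline might look like the sketch below; the TF-IDF features, toy examples, and logistic-regression classifier are assumptions of this note, not the authors' feature set or data.

```python
# Illustrative three-class text baseline (hate / offensive / neither).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["example hateful slur ...", "merely rude insult", "have a nice day"]
labels = ["hate", "offensive", "neither"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["totally harmless sentence"]))
```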
1,703.05175
|
Prototypical Networks for Few-shot Learning
|
['Jake Snell', 'Kevin Swersky', 'Richard S. Zemel']
|
['cs.LG', 'stat.ML']
|
We propose prototypical networks for the problem of few-shot classification,
where a classifier must generalize to new classes not seen in the training set,
given only a small number of examples of each new class. Prototypical networks
learn a metric space in which classification can be performed by computing
distances to prototype representations of each class. Compared to recent
approaches for few-shot learning, they reflect a simpler inductive bias that is
beneficial in this limited-data regime, and achieve excellent results. We
provide an analysis showing that some simple design decisions can yield
substantial improvements over recent approaches involving complicated
architectural choices and meta-learning. We further extend prototypical
networks to zero-shot learning and achieve state-of-the-art results on the
CU-Birds dataset.
|
2017-03-15T14:31:55Z
| null | null | null | null | null | null | null | null | null | null |
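A minimal sketch of the classification rule described above: embed the support set, average per-class embeddings into prototypes, and score a query by its (negative squared) distance to each prototype. The embedding network and episode sizes are placeholders.

```python
# Illustrative prototypical classification for one few-shot episode.
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 32))

def prototype_logits(support_x, support_y, query_x, n_classes):
    z_support, z_query = embed(support_x), embed(query_x)
    prototypes = torch.stack([z_support[support_y == c].mean(dim=0)
                              for c in range(n_classes)])
    # negative squared Euclidean distance acts as the logit for each class
    return -torch.cdist(z_query, prototypes) ** 2

support_x = torch.randn(5 * 3, 20)                      # 3 classes, 5 shots each
support_y = torch.arange(3).repeat_interleave(5)
query_x = torch.randn(4, 20)
print(prototype_logits(support_x, support_y, query_x, 3).argmax(dim=1))
```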
End of preview.