Eight open source deep learning frameworks
Introduction: Deep learning is a representation-learning method within machine learning that learns features from data. Its advantage is that unsupervised or semi-supervised feature learning and efficient hierarchical feature-extraction algorithms replace hand-crafted feature engineering. As the hottest topic of the moment, giants such as Google, Facebook, and Microsoft have invested in a series of emerging deep learning projects and have also been supporting open source deep learning frameworks.
Researchers use a variety of deep learning frameworks, including TensorFlow, Torch, Caffe, Theano, and Deeplearning4j. These frameworks have been applied to computer vision, speech recognition, natural language processing, and bioinformatics, and have achieved excellent results.
Let's take a look at the eight most commonly used open source frameworks in deep learning today:
TensorFlow is an open source mathematical computation library that performs calculations using a data flow graph. Nodes in the graph represent mathematical operations, while edges represent the tensors that flow between them. TensorFlow's flexible architecture lets computation be deployed on one or more CPUs and GPUs, on desktops and servers, or on mobile devices, all through a single API. TensorFlow was originally developed by researchers and engineers on the Google Brain team for research on machine learning and deep neural networks. It is now open source and can be applied in almost every field.
Data Flow Graph: describes mathematical computation using the nodes and edges of a directed graph. Nodes represent mathematical operations, and can also represent endpoints for feeding in data or reading out results. Edges describe the input/output relationships between nodes and carry the tensors passed between operations; it is this flow of tensors through the graph that gives TensorFlow its name. Once a node's input tensors are all available, nodes are assigned to computing devices and executed asynchronously (across nodes) and in parallel (within a node).
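The graph-then-execute idea can be illustrated with a toy sketch in plain Python (this is not TensorFlow code; the `Node` class and its names are invented here purely for illustration):

```python
# Toy illustration of a data flow graph: nodes are operations, edges carry
# values, and building the graph is a separate step from running it.

class Node:
    def __init__(self, op, inputs=()):
        self.op = op          # callable performing this node's math operation
        self.inputs = inputs  # edges: upstream nodes feeding this one

    def run(self):
        # Evaluate upstream nodes first, then apply this node's operation.
        return self.op(*(n.run() for n in self.inputs))

# Build the graph (a * b) + c without computing anything yet ...
a = Node(lambda: 2.0)
b = Node(lambda: 3.0)
c = Node(lambda: 1.0)
mul = Node(lambda x, y: x * y, (a, b))
add = Node(lambda x, y: x + y, (mul, c))

# ... then execute it in a separate step, as TensorFlow's graphs are run.
print(add.run())  # 7.0
```

Because the graph exists as a data structure before execution, a system like TensorFlow can analyze it, split it across devices, and schedule independent nodes in parallel.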
Flexibility: TensorFlow is not just a neural network library. In fact, any computation you can express as a data flow graph can use TensorFlow. Users build the graph and write the inner-loop code that drives the computation, and TensorFlow helps with assembling subgraphs. Defining a new operation only requires writing a Python function; if the low-level data operation you need is missing, you write some C++ code to define the operation.
Portability: can run on different devices, including CPUs, GPUs, mobile devices, cloud platforms, and more.
Automatic differentiation: TensorFlow's automatic differentiation capability benefits many graph-based machine learning algorithms.
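The core of graph-based automatic differentiation can be sketched in a few lines of plain Python (again, not TensorFlow's API; the `Var` class and `backward` function are invented for this illustration). Each operation records its inputs and local gradients so that gradients can be propagated backward through the graph:

```python
# Minimal reverse-mode automatic differentiation sketch.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # (parent_var, local_gradient) pairs
        self.grad = 0.0

    def __mul__(self, other):
        return Var(self.value * other.value,
                   parents=((self, other.value), (other, self.value)))

    def __add__(self, other):
        return Var(self.value + other.value,
                   parents=((self, 1.0), (other, 1.0)))

def backward(output):
    """Propagate gradients from the output back through the graph.

    A simple stack walk is enough for this small expression; a full
    implementation would visit nodes in reverse topological order.
    """
    output.grad = 1.0
    stack = [output]
    while stack:
        node = stack.pop()
        for parent, local_grad in node.parents:
            parent.grad += local_grad * node.grad
            stack.append(parent)

x, y = Var(2.0), Var(3.0)
z = x * y + x          # z = x*y + x
backward(z)
print(x.grad, y.grad)  # dz/dx = y + 1 = 4.0, dz/dy = x = 2.0
```

This is why expressing computation as a graph pays off: once every node knows its local gradient, gradients for the whole model come for free.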
Multiple programming languages: TensorFlow is easy to use, with both a Python interface and a C++ interface. Other languages can access the interface through the SWIG tool. (SWIG, the Simplified Wrapper and Interface Generator, is an excellent open source tool that integrates C/C++ code with most major scripting languages.)
Optimized performance: to make full use of hardware resources, TensorFlow can assign different execution units of a graph to different devices and manage the copies of data between them.
Torch is a scientific computing framework with broad support for machine learning algorithms. It has existed for more than a decade, but its real rise came when Facebook open-sourced a large set of Torch deep learning modules and extensions. Another distinctive feature of Torch is its use of the programming language Lua, which was widely used to develop video games. Torch's advantages include:
Simple model construction
Fast and efficient GPU support
Access to C via LuaJIT
Numerical optimization procedures, etc.
Embeds interfaces to iOS, Android, and FPGA backends
Caffe was developed by Yangqing Jia during his PhD at the University of California, Berkeley. Its full name is Convolutional Architecture for Fast Feature Embedding. It is a clean and efficient open source deep learning framework, currently maintained by the Berkeley Vision and Learning Center (BVLC). (Yangqing Jia has worked at MSRA, NEC, and Google Brain; he is also one of the authors of TensorFlow and currently works at Facebook's FAIR lab.)
Caffe's basic flow: Caffe follows a simple model of a neural network: all computation is expressed in the form of layers, where a layer takes some data as input and outputs some computed result. For example, a convolution layer takes an image as input, convolves it with the layer's filters, and outputs the result of the convolution. Each layer must implement two computations: forward, which computes the output from the input, and backward, which computes the gradient with respect to the input from the gradient supplied from above. Once these two functions are implemented, many layers can be connected into a network. The network takes our data as input (images, speech, and so on) and computes the output we need (such as a predicted label). During training, the labels let us compute the loss and gradients, and the gradients are then used to update the network's parameters.
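The layer contract described above can be sketched in plain Python (this is not Caffe's actual C++/Python API; the `ReLULayer` class here is an illustrative toy):

```python
# A layer implements forward (input -> output) and backward
# (output gradient -> input gradient), here for a ReLU nonlinearity.

class ReLULayer:
    def forward(self, x):
        self.x = x                        # cache input for the backward pass
        return [max(0.0, v) for v in x]

    def backward(self, grad_out):
        # Gradient passes through where the input was positive, else zero.
        return [g if v > 0 else 0.0 for v, g in zip(self.x, grad_out)]

layer = ReLULayer()
out = layer.forward([-1.0, 2.0, 3.0])
grad_in = layer.backward([1.0, 1.0, 1.0])
print(out)      # [0.0, 2.0, 3.0]
print(grad_in)  # [0.0, 1.0, 1.0]
```

Chaining many such layers, each obeying the same forward/backward contract, is exactly how a Caffe network is assembled and trained.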
Easy to get started: models and their corresponding optimization settings are given as plain text rather than code
Fast: able to run state-of-the-art models on massive data
Modular: easy to extend to new tasks and settings
Openness: public code and reference models for reproduction
Good community: anyone can participate in development and discussion under the BSD-2 license
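To illustrate the first point, a Caffe model is described in a protobuf text file rather than in code. A sketch of a single convolution layer definition might look like the following (the layer name and parameter values here are illustrative, not from any particular reference model):

```protobuf
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"      # input blob
  top: "conv1"        # output blob
  convolution_param {
    num_output: 20    # number of filters
    kernel_size: 5
    stride: 1
  }
}
```

Because the whole network is just such a text file, changing an architecture or sharing a model requires no recompilation.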
Born in 2008 at the University of Montreal, Theano has spawned a large number of deep learning Python packages, most notably Blocks and Keras. At Theano's heart is a compiler for mathematical expressions: it takes your computation's structure and turns it into highly efficient code, using numpy, efficient native libraries such as BLAS, and native (C++) code to run as fast as possible on CPUs or GPUs. It was designed specifically for the computations needed by the large neural network algorithms used in deep learning; it is one of the first frameworks of its kind (development began in 2007) and is considered an industry standard for deep learning research and development.
Tight NumPy integration - uses numpy.ndarray
Transparent GPU acceleration - up to 140x faster than CPU (32-bit floats only)
Efficient symbolic differentiation - computes derivatives of functions of one or many variables
Speed and stability optimizations - for example, computing log(1+x) accurately even for very small x
Generate C code dynamically - faster calculation
Extensive unit testing and self-verification - Detect and diagnose multiple errors
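The log(1+x) stability point above can be demonstrated in plain Python (this uses the standard `math` module, not Theano itself; Theano applies such rewrites automatically to symbolic expressions):

```python
# For very small x, 1 + x rounds to exactly 1.0 in floating point, so a
# naive log(1 + x) loses all precision, while log1p(x) stays accurate.
import math

x = 1e-18
naive = math.log(1 + x)   # 1 + 1e-18 rounds to exactly 1.0
stable = math.log1p(x)    # computed accurately, approximately 1e-18

print(naive)   # 0.0
print(stable)  # ~1e-18
```

A symbolic compiler like Theano can recognize the pattern log(1+x) in a user's expression graph and substitute the stable form without the user having to know about the pitfall.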
As its name suggests, Deeplearning4j is a deep learning framework "for Java" and the first commercial-grade open source deep learning library. Deeplearning4j was released by the startup Skymind in June 2014. Many well-known companies, such as Accenture, Chevrolet, Booz Allen Hamilton, and IBM, use Deeplearning4j. DeepLearning4j is a high-level open source deep learning library aimed at production environments and business applications. It integrates with Hadoop and Spark out of the box and allows developers to quickly add deep learning capabilities to their applications, in areas such as:
Speech to text
Spam filtering (anomaly detection)
E-commerce fraud detection
In addition to the well-known projects above, there are many other deep learning open source frameworks with distinctive features that are worth attention:
MXNet was created by developers from projects such as CXXNet, Minerva, and Purine, and is written mainly in C++. MXNet emphasizes memory efficiency and can even run tasks such as image recognition on a smartphone.
MXNet's system architecture, from top to bottom, consists of: bindings for various host languages; programming interfaces (matrix operations, symbolic expressions, distributed communication); a unified system implementation of the two programming styles; and support for a variety of hardware.
Preferred Networks, a deep-learning startup from Japan, released the Python framework Chainer in June 2015. Chainer's design follows the define-by-run principle: the network is defined dynamically as it runs, rather than being defined in advance at startup.
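The define-by-run idea can be sketched in plain Python (this is not Chainer's API; `dynamic_net` is an invented toy). Because the "graph" is simply whatever operations actually execute, ordinary control flow can change the network's structure per input:

```python
# Define-by-run sketch: the number of "layers" applied is decided at run
# time, per call, rather than fixed when the model is first declared.

def dynamic_net(x, depth):
    for _ in range(depth):
        x = max(0.0, 2.0 * x - 1.0)   # a toy "layer": affine map + ReLU
    return x

print(dynamic_net(0.75, 1))  # 0.5
print(dynamic_net(0.75, 3))  # 0.0
```

In a define-and-run framework the graph is fixed before any data flows through it; in a define-by-run framework like Chainer, loops and conditionals in plain Python directly shape the computation, which makes models with variable structure (such as recurrent networks over variable-length input) easier to express.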
PS: This article was compiled by Lei Feng Network (search the "Lei Feng Network" public account); reproduction without permission is prohibited.
Via KDnuggets et al