TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. TensorFlow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across many computational devices, including multicore CPUs, general-purpose GPUs, and custom-designed ASICs known as Tensor Processing Units (TPUs).
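As a minimal, hedged illustration of the dataflow-graph idea (not from the paper itself; it assumes a recent TensorFlow 2.x install), a Python function can be traced into a graph with `tf.function` and its operations inspected:

```python
# Minimal sketch: TensorFlow traces this Python function into a
# dataflow graph whose nodes are operations like MatMul and AddV2.
import tensorflow as tf

@tf.function
def affine(x, w, b):
    return tf.matmul(x, w) + b  # two graph nodes: a matmul, then an add

# Trace the function for concrete input shapes and inspect the graph.
graph = affine.get_concrete_function(
    tf.TensorSpec([1, 2], tf.float32),
    tf.TensorSpec([2, 2], tf.float32),
    tf.TensorSpec([1, 2], tf.float32),
).graph
op_types = [op.type for op in graph.get_operations()]
```

The runtime is then free to place these graph nodes on CPUs, GPUs, or TPUs.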
TensorFlow for Developers
This architecture gives flexibility to the application developer: whereas in previous “parameter server” designs the management of shared state is built into the system, TensorFlow enables developers to experiment with novel optimizations and training algorithms. It supports a variety of applications, with a focus on training and inference on deep neural networks. Several Google services use TensorFlow in production, we have released it as an open-source project, and it has become widely used for machine learning research. In this paper, we describe the TensorFlow dataflow model and demonstrate the compelling performance that TensorFlow achieves for several real-world applications. TensorFlow was open-sourced in large part to allow the community to improve it with contributions. The TensorFlow team has set up processes to manage pull requests, review and route issues filed, and answer Stack Overflow and mailing-list questions.
We’ve had more than 890 external contributors add to the code, with everything from small documentation fixes to large additions like OS X GPU support or the OpenCL implementation. (The broader TensorFlow GitHub organization has had 1,000 unique non-Googler contributors.)
TensorFlow has more than 76,000 stars on GitHub, and the number of other repositories that use it is growing every month; as of this writing, there are more than 20,000.
Stack Overflow is monitored by the TensorFlow team, and it’s a good way to get questions answered (with 8,000+ answered so far).
The external version of TensorFlow differs from the internal one only in minor ways. These include the interface to Google’s internal infrastructure (which would be no help to anyone outside Google) and some file paths. The core of TensorFlow, however, is identical. Changes merged internally appear externally within around a day and a half, and vice versa.
In the TensorFlow GitHub organization, you can find not only TensorFlow itself but a useful ecosystem of other repositories, including models, serving, TensorBoard, Project Magenta, and many more. (A few of these are described below.) You can also find TensorFlow APIs in many languages (Python, C++, Java, and Go), and the community has developed other bindings, including C#, Haskell, Julia, Ruby, Rust, and Scala.
TensorFlow has high standards around measurement and transparency. The team has developed a set of detailed benchmarks and has been careful to include all the details necessary to reproduce them. We’ve not yet run comparative benchmarks, but we would welcome others publishing comprehensive and reproducible benchmarks.
There’s a section of the TensorFlow site with information for performance-minded developers. Optimization can often be model-specific, but some general guidelines can make a big difference. The TensorFlow team has open-sourced a large number of models, which you can find in the tensorflow/models repo. For many of these, the released code includes not only the model graph but also trained model weights, which means you can try such models out of the box. You can tune many of them further using a process called transfer learning.
Many of the TensorFlow models include trained weights and examples that show how you can use them for transfer learning, e.g., to learn classifications for your own data. You do this by deriving information about your input data from the penultimate layer of a trained model, which encodes useful abstractions, and then using that as input to train your own much smaller neural net to predict your own classes. Because of the power of the learned abstractions, this training does not require large data sets.
For example, you can use transfer learning with the Inception image classification model to train an image classifier that uses your specialized image data.
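The steps above can be sketched in a few lines of Keras code. This is a minimal, hedged illustration: the tiny untrained `base` network below stands in for a real pretrained model (in practice you would load, say, `tf.keras.applications.MobileNetV2` with ImageNet weights), and the shapes and class count are made up.

```python
import numpy as np
import tensorflow as tf

# Stand-in for a pretrained model truncated at its penultimate layer.
base = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),  # "penultimate" features
])
base.trainable = False  # freeze the learned abstractions

# A much smaller head trained to predict your own 5 classes.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Toy data in place of your specialized data set.
x = np.random.rand(16, 32, 32, 3).astype("float32")
y = np.random.randint(0, 5, size=16)
model.fit(x, y, epochs=1, verbose=0)  # only the head's weights update
```

Because the base is frozen, training touches only the small dense head, which is why a modest data set suffices.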
TensorFlow for Mobile
Mobile is a great use case for TensorFlow. Mobile makes sense when there is a poor or missing network connection, or where sending continuous data to a server would be too expensive.
TensorFlow is working to help developers make lean mobile apps, both by continuing to reduce the code footprint and by supporting quantization.
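As one concrete example of quantization support, the TensorFlow Lite converter can apply post-training quantization when producing a mobile-ready model. This is a sketch assuming a recent TensorFlow 2.x install; the toy model stands in for a real trained network.

```python
import tensorflow as tf

# Toy stand-in for a trained model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(2),
])

# Convert to a TensorFlow Lite flatbuffer with default optimizations,
# which include post-training quantization of the weights.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()  # serialized model, ready to ship in an app
```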
(And although it’s early days, see also Accelerated Linear Algebra [XLA], a domain-specific compiler for linear algebra that optimizes TensorFlow computations.)
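In current TensorFlow releases, a function can opt into XLA compilation via `tf.function`'s `jit_compile` flag. A minimal sketch: XLA fuses the multiply, add, and reduction below into compiled code.

```python
import tensorflow as tf

@tf.function(jit_compile=True)  # ask XLA to compile this function
def scaled_sum(x, y):
    return tf.reduce_sum(x * 2.0 + y)

# 4 elements, each 1*2 + 1 = 3, so the sum is 12.0.
result = scaled_sum(tf.ones([4]), tf.ones([4]))
```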
One of the TensorFlow projects, MobileNet, is developing a set of computer vision models designed particularly to address the speed/accuracy trade-offs that need to be considered on mobile devices or in embedded applications. The MobileNet models can be found in the TensorFlow models repo as well.
One of the newer Android demos, TF Detect, uses a MobileNet model trained using the TensorFlow Object Detection API.
An especially fascinating feature of TensorBoard is its embedding visualizer. Embeddings are ubiquitous in machine learning, and in the context of TensorFlow it’s often natural to view tensors as points in space, so almost any TensorFlow model will give rise to various embeddings.
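A hedged sketch of feeding the embedding visualizer (names and paths here are illustrative; it assumes a recent TensorFlow with the bundled `tensorboard` package): save the embedding variable as a checkpoint and write a projector config pointing at it.

```python
import os
import tempfile

import tensorflow as tf
from tensorboard.plugins import projector

log_dir = tempfile.mkdtemp()  # illustrative; use your real log directory

# A toy embedding: 100 items, 16 dimensions each.
embedding = tf.Variable(tf.random.normal([100, 16]))
checkpoint = tf.train.Checkpoint(embedding=embedding)
checkpoint.save(os.path.join(log_dir, "embedding.ckpt"))

# Tell the projector which checkpoint tensor to visualize.
config = projector.ProjectorConfig()
emb = config.embeddings.add()
emb.tensor_name = "embedding/.ATTRIBUTES/VARIABLE_VALUE"
projector.visualize_embeddings(log_dir, config)  # writes projector_config.pbtxt
```

Running `tensorboard --logdir` on this directory then shows the 100 points projected into 2D or 3D.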