
We all have seen Facebook grow. It seems like yesterday that many of us were introduced to this new social network with a naive impression of sharing pictures with friends and family. Today, the platform is considerably more robust and continually rolls out new features beyond basic networking with old friends. Facebook is building a business with long-term prospects, and the role of Facebook's artificial intelligence in that business is enormous. Better yet, it is crucial.



Facebook is building its business at high speed by learning about its users and packaging their data for the benefit of advertisers. The company functions around the goal of connecting every person on the planet through Facebook-owned tech products and services (such as WhatsApp, Instagram, Oculus, and more) within 100 years. AI is the way to reach that goal.

As a platform enabling conversation and communication between people, Facebook has become a highly valuable source for knowing users' lifestyles, interests, behavior patterns, and tastes inside and out. What do individual users like? What don't they like? This data, voluntarily provided but messily structured, can be utilized for profit at enormous value.

That's where AI comes in. AI enables machines to learn to classify data, all by themselves. The simplest example of this would be AI image analysis identifying a dog without telling the machine what a dog looks like. This begins to give structure to unstructured data: it quantifies it and recasts it into a form from which understandable insights can then be generated.
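As a rough illustration of that idea (and not anything Facebook has published), the short Python sketch below uses an off-the-shelf pretrained classifier from torchvision to turn an unstructured photo into a structured label; the file name photo.jpg is a placeholder.

import torch
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet preprocessing for the pretrained model.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

image = Image.open("photo.jpg").convert("RGB")   # placeholder image file
batch = preprocess(image).unsqueeze(0)           # add a batch dimension

with torch.no_grad():
    class_index = model(batch).argmax(dim=1).item()

# ImageNet categories include many dog breeds, so an unlabeled photo
# becomes a structured label such as "golden retriever".
print(weights.meta["categories"][class_index])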

And that’s just the beginning. There are many use cases of how Facebook is revolutionizing its business through the use of Artificial Intelligence.

Analyzing Text

Believe it or not, a large amount of the data shared on Facebook is still text. Video is all the rage given its high engagement and larger data volume in megabytes, but text often provides better value. After all, a written explanation often conveys more than even a good video or image of the same thing.

A brilliant tool used by Facebook is called DeepText, which deciphers the meaning of posted content. Facebook then generates leads with this tool by directing people to advertisers based on the conversations they are having. It offers user-related shopping links that connect chats and posts to potential interests.

Mapping Population Density

Through the use of AI, Facebook is now working to map the world's population density. The company revealed some details about this initiative back in 2016 when it created maps for 22 nations. Today, Facebook's maps cover the majority of Africa, and it won't be long before the whole world's population is mapped. With the help of satellite imagery and AI, this tedious task is getting done. As per Facebook's latest reveal, their all-new machine learning systems are faster and more efficient than the ones originally released in 2016.

Rigorous evolution has made this possible, through on-the-ground data and high-resolution satellite imagery. Facebook's in-house teams and third-party partners have intensified their efforts to pull off this unprecedented job. This is revolutionary work, but more than that, it will have humanitarian benefits and applications. The data will be of enormous help for disaster relief and vaccination campaigns.

Easy Translation

With countless people using Facebook all over the world, language has always been a barrier. This is simplified by Facebook's Artificial Intelligence-based automatic translation system. The Applied Machine Learning team helps 800 million people every month see translated posts in their news feed. Since Facebook is all about human interaction, people fill their feeds with expressions and emotions; translation is therefore crucial to social interactions on the site.

Chatbots

From automated subscription content like weather and traffic to customized communication like receipts, shipping notifications, and live automated messages, using the site has become easier and more efficient with chatbots at our service. Facebook has a powerful and highly functional bot API for the Messenger platform that handles three functions smoothly:

  • Send/receive API. This API is all about sending and receiving text, images, and rich bubbles comprised of multiple calls-to-action (a minimal example of the Send API follows this list). A welcome screen for threads can also be created.
  • Message templates. Facebook offers developers pre-made message templates which allow customers to tap buttons and see rich template images. This is much easier than building bot interactions from scratch. Structured messages with calls-to-action are remarkably user-friendly.
  • Welcome screen. Offering a way to customize the experience, the welcome screen initiates the journey toward better communication. Here people discover a chatbot's features and start the conversation.
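To make the Send API concrete, here is a minimal Python sketch (using the requests library) that posts a text message through the Messenger Send API. The page access token and recipient PSID are placeholders, and the Graph API version shown is only an example; this is a sketch, not Facebook's official sample code.

# Minimal sketch: send a text message through the Messenger Send API.
# PAGE_ACCESS_TOKEN and RECIPIENT_PSID are placeholders; the Graph API
# version below is only an example.
import requests

PAGE_ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"
RECIPIENT_PSID = "RECIPIENT_PSID"

url = "https://graph.facebook.com/v12.0/me/messages"
payload = {
    "recipient": {"id": RECIPIENT_PSID},
    "message": {"text": "Hello from a chatbot!"},
}

response = requests.post(
    url,
    params={"access_token": PAGE_ACCESS_TOKEN},
    json=payload,
)
print(response.status_code, response.json())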

Caffe2go

Another feature utilizing artificial intelligence on Facebook is Caffe2go, which enables the Facebook app to transform video, not just photos, using machine learning in real time by adding artsy touches, right on your phone. Similar to Prisma, the feature is great for recording live video and transforming it with creative effects that historically required sending the video to a data center for processing. Caffe2go works offline and renders live. This puts AI literally in the palm of your hand and provides everyone with state-of-the-art tools for expressing their creativity freely and in a flash.




Preventing Suicide

Around the world, suicide is the second leading cause of death for 15- to 29-year-olds. Thankfully, Facebook can now help prevent suicides through the use of AI. AI can flag posts from people who might be in need or showing suicidal tendencies. The AI uses machine learning to flag key phrases in posts and concerned comments from friends or family members to help identify users who may be at risk. Analyzing human nuance is quite complex, but AI is able to track the context and understand what is a suicidal pattern and what isn't. It's great to see that Facebook and other social media sites are doing their part to help with this issue.

Detecting Bad Content

The thorniest social media issues are always related to security and privacy. In addition to the uses already discussed, Facebook is using AI to detect content falling into seven main categories: nudity, graphic violence, terrorism, hate speech, spam, fake accounts, and suicide prevention. AI helps identify fake accounts created for malicious purposes and shuts them down instantly.

Hate speech is tricky stuff. Requiring the combined efforts of AI and the company's community standards team, it is a tough nut to crack. It's always difficult to determine whether hate speech is actually present or whether there is a nuance to be considered. That's why the current approach involves AI automatically flagging potential hate speech, followed by manual review. In other areas, Facebook's AI system relies on computer vision and computes a degree of confidence in order to determine whether or not to remove the content.

Summary

In a nutshell, AI is here to stay and is surely going to make a drastic impact on the way Facebook serves both users and advertisers. Although the company remains tight-lipped about future plans, Facebook utilizes technology to offer new features and services each year. With so many AI-based initiatives on board, Facebook is able to handle new challenges and explore new paths. After all, innovation has no end.

Want to learn more about AI and how to help businesses integrate it into their service offerings? Then join the Vietnam AI Grand Challenge, a series of hackathons focused on Artificial Intelligence with the goal of building the Ultimate AI Virtual Assistant. The series is organized by Kambria with the support of the Vietnam Ministry of Science & Technology and the Ministry of Planning & Investment.

Participants will be guided and trained in AI technology through a series of educational workshops and will have the opportunity to work with leading AI experts in Vietnam and Silicon Valley. Winners will also have a chance to develop their project in a one-month incubation program before competing in the Grand Finale in August.

Cheers

How Facebook Uses Artificial Intelligence

 


This is the second progress video about the port of openFrameworks to the Jetson.

The demonstration sketch shows that most of the OpenGL issues have been sorted out. Also, this is the first demo that includes a GLSL shader that is rendering the background. So some good progress is being made.

Two different openFrameworks add-ons are being demonstrated:

ofxTimeline and ofxUI.

ofxTimeline is being used to control the virtual camera movement. The timeline runs a 1-minute loop that controls the pan, zoom, and orbit of the camera. ofxUI provides the GUI element in the bottom left-hand corner, which displays the camera position dynamically.

The depth information from the Kinect is being rendered in what is referred to as a point cloud. Basically, a 3D point mesh is constructed for each frame being displayed, and the color of each point is calculated from the color camera (RGB), then displayed. There is no CUDA acceleration of the process at this point; it's just done on one of the ARM cores.
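To make the point-cloud construction concrete, here is a small numpy sketch of the same idea: deproject each depth pixel into a 3D point using camera intrinsics and attach the registered RGB color. The intrinsics and the input arrays are placeholder values, and this plain CPU code stands in for the openFrameworks mesh construction described above.

# Sketch: build a colored point cloud from a depth frame and a registered
# RGB frame. Intrinsics (fx, fy, cx, cy) are placeholder values, not the
# Kinect's calibrated ones.
import numpy as np

fx, fy = 525.0, 525.0      # focal lengths in pixels (placeholders)
cx, cy = 319.5, 239.5      # principal point (placeholders)

depth = np.random.uniform(0.5, 4.0, (480, 640))                 # depth in meters (stand-in data)
rgb = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in color frame

v, u = np.indices(depth.shape)        # pixel coordinates (row, column)
z = depth
x = (u - cx) * z / fx                 # pinhole deprojection
y = (v - cy) * z / fy

points = np.stack([x, y, z], axis=-1).reshape(-1, 3)   # N x 3 positions
colors = rgb.reshape(-1, 3)                            # N x 3 colors

valid = points[:, 2] > 0                               # drop pixels with no depth
point_cloud = np.hstack([points[valid], colors[valid]])
print(point_cloud.shape)                               # (N, 6): x, y, z, r, g, b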

While there are still a few bugs hidden in the openFrameworks port, for the most part, everything is running smoothly.


Thanks & Cheers


Jetson TK1 Kinect Point Cloud in openFrameworks

 The Intel RealSense T265 Tracking Camera solves a fundamental problem in interfacing with the real world by helpfully answering “Where am I?” Looky here:



Background

One of the most important tasks in interfacing with the real world from a computer is to calculate your position in relationship to a map of the surrounding environment. When you do this dynamically, this is known as Simultaneous Localization And Mapping, or SLAM.

If you’ve been around the mobile robotics world at all (rovers, drones, cars), you probably have heard of this term. There are other applications too, such as Augmented Reality (AR) where a computing system must place the user precisely in the surrounding environment. Suffice it to say, it’s a foundational problem.

SLAM is a computational problem. How does a device construct or update a map of an unknown environment while simultaneously keeping track of its own location within that environment? People do this naturally in small places such as a house. At a larger scale, people have been clever enough to use visual navigational aids, such as the stars, to help build their maps.

This V-SLAM solution does something very similar. Two fisheye cameras combine with information from an Inertial Measurement Unit (IMU) to navigate by visual features, tracking the device's way around even unknown environments with accuracy.

Let’s just say that this is a non-trivial problem. If you have tried to implement this yourself, you know that it can be expensive and time consuming. The Intel RealSense T265 Tracking Camera provides precise and robust tracking that has been extensively tested in a variety of conditions and environments.

The T265 is a self-contained tracking system that plugs into a USB port. Install the librealsense SDK, and you can start streaming pose data right away.
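As a rough sketch of what that looks like from Python, the snippet below streams pose data with the librealsense Python bindings (pyrealsense2); it assumes the bindings are installed and a T265 is plugged in.

# Sketch: stream 6-DoF pose data from the T265 using pyrealsense2.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.pose)   # the T265 provides a pose stream

pipeline.start(config)
try:
    for _ in range(100):               # read a short burst of frames
        frames = pipeline.wait_for_frames()
        pose = frames.get_pose_frame()
        if pose:
            data = pose.get_pose_data()
            print("Position:", data.translation, "Velocity:", data.velocity)
finally:
    pipeline.stop()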

Tech Stuffs

Here are some tech specs:

Cameras

  • OV9282
  • Global Shutter, Fisheye Field of View = 163 degrees
  • Fixed Focus, Infrared Cut Filter
  • 848 x 800 resolution
  • 30 frames per second

Inertial Measurement Unit (IMU)

  • 6 Degrees of Freedom (6 DoF)
  • Accelerometer 
  • Gyroscope

Visual Processing Unit (VPU)

  • Movidius MA215x ASIC (Application Specific Integrated Circuit)

The power requirement is 300 mA at 5V (!!!). The package is 108 mm wide x 24.5 mm high x 12.5 mm deep. The camera weighs 60 grams.

Installation

To interface with the camera, Intel provides the open source library librealsense. On the JetsonHacksNano account on Github, there is a repository named installLibrealsense. The repository contains convenience scripts to install librealsense.

Note: Starting with L4T 32.2.1/JetPack 4.2.2 a swap file is now part of the default install. You do not need to create a swap file if you are using this release or later. Skip the following step if using 32.2.1 or above.

In order to use the install script, you will either need to create a swapfile to ease an out-of-memory issue, or modify the install script to run fewer jobs during the make process. In the video, we chose the swapfile route. To install the swapfile:

$ git clone https://github.com/jetsonhacksnano/installSwapfile
$ cd installSwapfile
$ ./installSwapfile.sh
$ cd ..

You’re now ready to install librealsense.

$ git clone https://github.com/jetsonhacksnano/installLibrealsense
$ cd installLibrealsense
$ ./installLibrealsense.sh

While the installLibrealsense.sh script has the option to compile librealsense with CUDA support, we do not select that option. If you are using the T265 alone, there is no advantage in using CUDA, as the librealsense CUDA routines only convert images from the RealSense depth cameras (D415, D435, and so on).

The location of librealsense SDK products:

  • The library is installed in /usr/local/lib
  • The header files are in /usr/local/include
  • The demos and tools are located in /usr/local/bin

Go to the demos and tools directory, and check out the realsense-viewer application and all of the different demonstrations!

Conclusion

The Intel RealSense T265 is a powerful tool for use in robotics and augmented/virtual reality. Well worth checking out!

Notes

  • Tested on Jetson Nano L4T 32.1.0
  • If you have a mobile robot, you can send wheel odometry to the RealSense T265 through the librealsense SDK for better accuracy. The details are still being worked out.

Thanks, Cheers.

Jetson Nano – RealSense Tracking Camera

Many people use Intel RealSense cameras with robots. Here we install the realsense-ros wrapper on the NVIDIA Jetson Nano developer kit. Looky here:




Background

There are several members in the Intel RealSense camera family. This includes the Depth Cameras (D415, D435, D435i) and Tracking Camera (T265). There are also more recent introductions which are just becoming available.

The cameras all share the same Intel® RealSense™ SDK which is known as librealsense2. The SDK is open source and available on Github. We have articles for installing librealsense (D400x article and T265 article) here on the JetsonHacks site.

The size and weight of the cameras make them very good candidates for robotic applications. Computing hardware onboard the cameras provides depth and tracking information directly, which makes them a very attractive addition to a Jetson Nano. Plus, the cameras have low power consumption. Because ROS is the most popular middleware for robotics, here's how you install realsense-ros on the Jetson Nano.

Install RealSense Wrapper for ROS

There are two prerequisites for installing realsense-ros on the Jetson Nano. The first is to install librealsense as linked above. The second prerequisite is a ROS installation. Check out Install ROS on Jetson Nano for a how-to on installing ROS Melodic on the Nano.

With the two prerequisites out of the way, it’s time to install realsense-ros. There are convenience scripts to install the RealSense ROS Wrapper on the Github JetsonHacksNano account.

$ git clone https://github.com/JetsonHacksNano/installRealSenseROS
$ cd installRealSenseROS
$ ./installRealSenseROS <catkin workspace name>

Where catkin workspace name is the path of the catkin workspace in which to place the RealSense ROS package. If no catkin workspace name is specified, the script defaults to ~/catkin_ws.

Note: Versions are in the releases section. The master branch of the repository will usually match the most recent release of L4T, but you may have to look through the releases for a suitable version. To check out one of the releases, switch to the installRealSenseROS directory and then:

$ git checkout <version number>

e.g.

$ git checkout vL4T32.2.1

The ROS launch files for the camera(s) are in the src directory of the Catkin workspace, under realsense-ros/realsense2_camera/launch. There are a variety of launch files to choose from. For example:

$ roslaunch realsense2_camera rs_camera.launch

You will need to make sure that your Catkin workspace is correctly sourced, and roscore is running.
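For a minimal sketch of consuming the published data from Python, a rospy node like the one below subscribes to the color image topic. The topic name assumes the usual rs_camera.launch defaults; verify it with rostopic list on your own setup.

#!/usr/bin/env python
# Sketch: subscribe to the RealSense color stream published by realsense-ros.
import rospy
from sensor_msgs.msg import Image

def image_callback(msg):
    # Log basic information about each incoming frame.
    rospy.loginfo("Got %dx%d image, encoding=%s", msg.width, msg.height, msg.encoding)

def main():
    rospy.init_node("realsense_listener")
    rospy.Subscriber("/camera/color/image_raw", Image, image_callback)
    rospy.spin()

if __name__ == "__main__":
    main()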

Notes

There are dependencies between the versions of librealsense and realsense-ros. The install scripts are also dependent on the version of L4T. Check the releases on the Github accounts to match.

In the video:

  • Jetson Nano
  • L4T 32.2.1 / JetPack 4.2.2
  • librealsense 2.25.0
  • realsense-ros 2.28.0

realsense-ros does not “officially” support ROS Melodic. However, we haven’t encountered any issues as of the time of this writing.

Thanks Cheers

RealSense ROS Wrapper – Jetson Nano

 Back in September, we installed the Caffe Deep Learning Framework on a Jetson TX1 Development Kit. With the advent of the Jetson TX2, now is the time to install Caffe and compare the performance difference between the two. Looky here:



Background

As you recall, Caffe is a deep learning framework developed with cleanliness, readability, and speed in mind. It was created by Yangqing Jia during his PhD at UC Berkeley, and is in active development by the Berkeley Vision and Learning Center (BVLC) and by community contributors.

Over the last couple of years, a great deal of progress has been made in speeding up the performance of the supporting underlying software stack. In particular the cuDNN library has been tightly integrated with Caffe, giving a nice bump in performance.

Caffe Installation

A script is available in the JetsonHacks Github repository which will install the dependencies for Caffe, download the source files, configure the build system, compile Caffe, and then run a suite of tests. Passing the tests indicates that Caffe is installed correctly.

This installation demonstration is for an NVIDIA Jetson TX2 running L4T 27.1, a 64-bit Ubuntu 16.04 variant. The installation of L4T 27.1 was done using JetPack 3.0, and includes installation of OpenCV4Tegra, CUDA 8.0, and cuDNN 5.1.

Before starting the installation, you may want to set the CPU and GPU clocks to maximum by running the script:

$ sudo ./jetson_clocks.sh

The script is in the home directory.

In order to install Caffe:

$ git clone https://github.com/jetsonhacks/installCaffeJTX2.git
$ cd installCaffeJTX2
$ ./installCaffe.sh

Installation should not require intervention; in the video, installation of dependencies and compilation took about 14 minutes. Running the unit tests takes about 19 minutes. While not strictly necessary, running the unit tests makes sure that the installation is correct.

Test Results

At the end of the video, there are a couple of timed tests which can be compared with the Jetson TX1. The following table adds some more information:

Jetson TK1 vs. Jetson TX1 vs. Jetson TX2 Caffe GPU Example Comparison
10 iterations, times in milliseconds
Machine                    Average FWD   Average BACK   Average FWD-BACK
Jetson TK1 (32-bit OS)     234           243            478
Jetson TX1 (64-bit OS)     80            119            200
Jetson TX2 (Mode Max-Q)    78            97             175
Jetson TX2 (Mode Max-P)    65            85             149
Jetson TX2 (Mode Max-N)    56            75             132

The tests are running 50 iterations of the recognition pipeline, and each one is analyzing 10 different crops of the input image, so look at the ‘Average Forward pass’ time and divide by 10 to get the timing per recognition result. For the Max-N version of the Jetson TX2, that means that an image recognition takes about 5.6 ms.
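If you want to reproduce a rough forward-pass timing yourself, a pycaffe sketch like the one below works, assuming pycaffe was built alongside Caffe; the prototxt and caffemodel paths are placeholders, and the numbers will not exactly match the benchmark harness used for the table above.

# Sketch: time Caffe forward passes from Python. The model and weight file
# paths are placeholders; point them at a deploy prototxt and a matching
# caffemodel on your system.
import time
import caffe

caffe.set_mode_gpu()
net = caffe.Net("deploy.prototxt", "weights.caffemodel", caffe.TEST)

iterations = 50
start = time.time()
for _ in range(iterations):
    net.forward()
elapsed_ms = (time.time() - start) * 1000.0

print("Average forward pass: %.1f ms" % (elapsed_ms / iterations))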

The Jetson TX2 introduces the concept of performance modes. The Jetson TX1 has 4 ARM Cortex-A57 CPU cores. In comparison, there are 6 CPU cores in the Jetson TX2's Tegra SoC. Four are ARM Cortex-A57; the other two are NVIDIA Denver 2. Depending on performance and power requirements, the cores can be taken online or offline, and the frequencies of their clocks set independently. There are five predefined modes available through the use of the nvpmodel CLI tool.

  • sudo nvpmodel -m 1 (Max-Q)
  • sudo nvpmodel -m 2 (Max-P)
  • sudo nvpmodel -m 0 (Max-N)

Max-Q uses only the 4 ARM A57 cores at a minimal clock frequency. Note that from the table, this gives performance equivalent to the Jetson TX1. Max-Q sets the power profile to be 7.5W, so this represents Jetson TX1 performance while only using half the amount of power of a TX1 at full speed!

Max-P also uses only the 4 ARM A57 cores, but at a faster clock frequency. From the table, we can see that the Average Forward Pass drops from the Max-Q value of 78 to the Max-P value of 65. My understanding is that Max-P limits power usage to 15W.

Finally, we can see that in Max-N mode the Jetson TX2 performs best of all. (Note: this wasn't shown in the video; it's a special bonus for our readers here!) In addition to the 4 ARM A57 cores, the Denver 2 cores come online, and the clocks on the CPU and the GPU are set to their maximum values. To put it in perspective, the Jetson TX1 at max clock runs the test in about 10,000 ms, while the Jetson TX2 at Max-N runs the same test in about 6,600 ms. Quite a bit of giddy-up.

Conclusion

Deep learning is in its infancy, and as people explore its potential, the Jetson TX2 seems well positioned to take the lessons learned and deploy them in the embedded computing ecosystem. There are several different deep learning platforms being developed, and the improvement in Caffe on the Jetson Dev Kits over the last couple of years is impressive.

Notes

The installation in this video was done directly after flashing L4T 27.1 on to the Jetson TX2 with CUDA 8.0, cuDNN r5.1 and OpenCV4Tegra.

The latest Caffe commit used in the video is: 317d162acbe420c4b2d1faa77b5c18a3841c444c


Thanks Cheers

Caffe Deep Learning Framework – NVIDIA Jetson TX2