As a recent grad, you’ve probably had at least a couple of experiences working in a “real world” office. But the question is, what changes when you’re a full-time employee and not just a summer or semester-long intern?

Lucky for you, we scoured the web for the advice you need to know as you take your first steps into the big, bad workforce.

  1. College won’t teach you about these seven things you need to know about entering the workforce. (Mashable)
  2. Understanding that grammar counts is just one of the many pieces of unconventional career advice you should learn before you start your first job. (Forbes)
  3. Sheryl Sandberg has some great words of wisdom for recent grads just starting to look at the job market. First things first? Banish self-doubt. (Entrepreneur)
  4. Are you really all that prepared for the workforce? Studies show you may not be. (Slate)
  5. Soft skills? Yeah, those are really, really important when you’re starting off your career. (Fox Business)
  6. Forget what you need to do when starting out; here’s what not to do. (The New York Times)
  7. A lot of times new grads forget that there is in fact a transition period between college and the real world. (Quintessential Careers)
  8. Once you get settled in, there are nine things you should do during the first week of your job. (Business Insider)



8 Crucial Things to Know Before Starting Your First Job

For robotics applications, many consider the Robot Operating System (ROS) to be the default go-to solution. The version of ROS that runs on the NVIDIA Jetson Nano Developer Kit is ROS Melodic. Installing ROS on the Jetson Nano is simple. Looky here:

Background

ROS was originally developed at Stanford University as a platform to integrate methods drawn from all areas of artificial intelligence, including machine learning, vision, navigation, planning, reasoning, and speech/natural language processing.

From 2008 until 2013, development on ROS was performed primarily at the robotics research company Willow Garage, which open-sourced the code. During that time, researchers at over 20 different institutions collaborated with Willow Garage and contributed to the codebase. In 2013, ROS stewardship transitioned to the Open Source Robotics Foundation.

From the ROS website:

The Robot Operating System (ROS) is a flexible framework for writing robot software. It is a collection of tools, libraries, and conventions that aim to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms.

Why? Because creating truly robust, general-purpose robot software is hard. From the robot’s perspective, problems that seem trivial to humans often vary wildly between instances of tasks and environments. Dealing with these variations is so hard that no single individual, laboratory, or institution can hope to do it on their own.

Core Components

At the lowest level, ROS offers a message passing interface that provides inter-process communication. Like most message-passing systems, ROS has a publish/subscribe mechanism along with request/response procedure calls. An important thing to remember about ROS, and one of the reasons that it is so powerful, is that you can run the system on a heterogeneous group of computers. This allows you to distribute tasks across different systems easily.

For example, you may want to have the Jetson running as the main node, and controlling other processors as control subsystems. A concrete example is to have the Jetson doing a high-level task like path planning, and instructing microcontrollers to perform lower-level tasks like controlling motors to drive the robot to a goal.
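To make the publish/subscribe model concrete, here is a minimal rospy publisher. It is the classic ROS “talker” pattern rather than anything specific to the Jetson, and the node and topic names are illustrative. Any other node on the network, whether on the Jetson or another machine, can subscribe to the same topic.

#!/usr/bin/env python
# Minimal ROS publisher sketch (the classic "talker" pattern).
# Node and topic names here are illustrative, not from this article.
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rospy.init_node('talker', anonymous=True)
    rate = rospy.Rate(10)  # publish at 10 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data='hello from the Jetson'))
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass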

At a higher level, ROS provides facilities and tools for a Robot Description Language, diagnostics, pose estimation, localization, navigation, and visualization. 

You can read more about the Core Components here.

Installation

The installROS repository on the JetsonHacksNano account on Github contains convenience scripts to help us install ROS. The main script, installROS.sh, is a straightforward implementation of the install instructions taken from the ROS Wiki. The instructions install ROS Melodic on the Jetson Nano.

You can clone the repository onto the Jetson:

$ git clone https://github.com/JetsonHacksNano/installROS.git
$ cd installROS

installROS.sh

The install script installROS.sh will install the prerequisites and ROS packages you specify. Usage:

Usage: ./installROS.sh  [[-p package] | [-h]]
 -p | --package <packagename>  ROS package to install
                               Multiple Usage allowed
                               The first package should be a base package. One of the following:
                                 ros-melodic-ros-base
                                 ros-melodic-desktop
                                 ros-melodic-desktop-full
 

The default is ros-melodic-ros-base if you do not specify any packages. Typically, people install ros-melodic-ros-base if they are not running any desktop applications on the robot.

Example Usage:

$ ./installROS.sh -p ros-melodic-desktop -p ros-melodic-rgbd-launch

This script installs a baseline ROS environment. There are several tasks:

  • Enables the universe, multiverse, and restricted repositories
  • Adds the ROS sources list
  • Sets the needed keys
  • Loads the specified ROS packages (defaults to ros-melodic-ros-base if none are specified)
  • Initializes rosdep

You can edit this file to add the ROS packages for your application.

setupCatkinWorkspace.sh

setupCatkinWorkspace.sh builds a Catkin Workspace.

Usage:

$ ./setupCatkinWorkspace.sh [optionalWorkspaceName]

where optionalWorkspaceName is the name and path of the workspace to be used. The default workspace name is catkin_ws. If a path is not specified, the default path is the current home directory. This script also sets up some ROS environment variables.

The script sets placeholders for some ROS environment variables in the file ~/.bashrc.

The file .bashrc is located in the home directory. The preceding period indicates that the file is “hidden”. The names of the ROS variables that the script adds are (they should be towards the bottom of the .bashrc file):

  • ROS_MASTER_URI
  • ROS_IP

The script sets ROS_MASTER_URI to the local host, and lists the network interfaces after the ROS_IP entry. You will need to configure these variables for your robot's network configuration and desired network topology.
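For example, on a robot where the Jetson itself runs the ROS master, the finished entries might look like the following; the ROS_IP value is a placeholder for the Jetson's actual address on your network:

export ROS_MASTER_URI=http://localhost:11311
export ROS_IP=192.168.1.100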

Notes

  • In the video, the Jetson Nano is freshly prepared with L4T 32.2.1 / JetPack 4.2.2
  • In the video, the Jetson Nano is running from a micro-SD card.
Cheers

Install ROS on Jetson Nano

We have all seen Facebook grow. It seems like yesterday that many of us were introduced to this new social network with the naive impression that it was for sharing pictures with friends and family. Today, the platform is considerably more robust and continually unfolds new features beyond basic networking with old friends. Facebook is building a business with long-term prospects in which the role of artificial intelligence is enormous. Better yet, it is crucial.



Facebook is building its business at high speed by learning about its users and packaging their data for the benefit of advertisers. The company functions around the goal of connecting every person on the planet through Facebook-owned tech products and services (such as WhatsApp, Instagram, Oculus, and more) within 100 years. AI is central to that ambition.

As a platform enabling conversation and communication between people, Facebook is a highly valuable source for knowing its users' lifestyles, interests, behavior patterns, and tastes inside and out. What do individual users like? What don't they like? This data — voluntarily provided but messily structured — can be utilized for profit at an exorbitant value.

That’s where AI comes in. AI enables machines to learn to make sense of data, all by themselves. The simplest example of this would be AI image analysis identifying a dog without being told what a dog looks like. This begins to give structure to unstructured data. It quantifies it and represents it in a form from which understandable insights can then be generated.

And that’s just the beginning. There are many use cases of how Facebook is revolutionizing its business through the use of Artificial Intelligence.

Analyzing Text

Believe it or not, a large amount of the data shared on Facebook is still text. Videos are all the rage, considering their high engagement and larger data volume in terms of megabytes, but text often provides better value. After all, a clear written explanation can convey more than even a good video or image of the same thing.

A brilliant tool used by Facebook is called DeepText, which deciphers the meaning of posted content. Facebook then generates leads with this tool by directing people to advertisers based on the conversations they are having. It offers user-related shopping links that connect chats and posts to potential interests.

Mapping Population Density

Through the use of AI, Facebook is now working to map the world's population density. The company revealed some details about this work back in 2016, when it created maps for 22 nations. Today, Facebook's maps cover the majority of Africa, and it won't be long before the whole world's population is mapped. With the help of satellite imagery and AI, this tedious task is getting completed. Per Facebook's latest announcement, its newest machine learning systems are faster and more efficient than those originally released in 2016.

Rigorous iteration has made this possible, combining on-the-ground data with high-resolution satellite imagery. Facebook's in-house teams and third-party partners have intensified their efforts on a job of unprecedented scale. This is revolutionary work, but more than that, it has humanitarian benefits and applications. The data will be enormously helpful for disaster relief and vaccination campaigns.

Easy Translation

With a nearly endless number of people using Facebook all over the world, language has always been a barrier. This is simplified by Facebook's AI-based automatic translation system. The Applied Machine Learning team helps 800 million people every month see translated posts in their news feeds. Since Facebook is all about human interactions, people fill their feeds with expressions and emotions. Hence, translation is crucial to social interactions on the site.

Chatbots

From automated subscription content like weather and traffic to customized communication like receipts, shipping notifications, and live automated messages, using the site has become easier and more efficient with chatbots at our service. Facebook has a powerful and highly functional bot API for the Messenger platform that provides three main functions (a rough sketch of calling the send side follows the list):

  • Send/receive API. This API is all about sending and receiving text, images, and rich bubbles comprised of multiple calls-to-action. A welcome screen for threads can also be created.
  • Message templates. Facebook offers developers pre-made message templates which let customers tap buttons and see attractive template images. This is much easier than coding bot interactions from scratch, and structured messages with calls-to-action are amazingly user-friendly.
  • Welcome screen. Offering a tool to customize your experience, the Messenger app is all about better communication and retrieving results as needed. The welcome screen initiates this journey; here people discover chatbot features and start the conversation.
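As a concrete illustration, a Messenger bot sends a message by POSTing JSON to the Graph API Send endpoint. Here is a rough Python sketch; the access token and recipient id are placeholders, and the Graph API version shown may differ for your app.

import requests

PAGE_ACCESS_TOKEN = 'YOUR_PAGE_ACCESS_TOKEN'  # placeholder

def send_text(recipient_id, text):
    # POST a text message to the Messenger Send API.
    url = 'https://graph.facebook.com/v2.6/me/messages'
    payload = {
        'recipient': {'id': recipient_id},
        'message': {'text': text},
    }
    resp = requests.post(url, params={'access_token': PAGE_ACCESS_TOKEN}, json=payload)
    resp.raise_for_status()
    return resp.json()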

Caffe2go

Another Facebook feature utilizing artificial intelligence is Caffe2go, which enables the Facebook app to transform video — not just photos — using machine learning in real time, adding artsy touches right on your phone. Similar to Prisma, the feature is great for recording live video and transforming it with creative effects that historically required sending the video to a data center for processing. Caffe2go works offline and renders live. This technique puts AI literally in the palm of your hand and provides everyone with state-of-the-art tools for expressing their creativity freely and in a flash.




Preventing Suicide

Around the world, suicide is the second leading cause of death among 15-to-29-year-olds. Thankfully, Facebook can now help prevent suicides through the use of AI. The system can flag posts from people who might be in need or showing suicidal tendencies. It uses machine learning to pick out key phrases in posts, along with concerned comments from friends or family members, to help identify users who may be at risk. Analyzing human nuance as a whole is quite complex, but AI is able to track the context and distinguish what is a suicidal pattern from what isn't. It's great to see that Facebook and other social media sites are doing their part to help with this issue.

Detecting Bad Content

The thorniest social media issues are always related to security and privacy. In addition to the use cases already discussed, Facebook is using AI to detect content falling into seven main categories: nudity, graphic violence, terrorism, hate speech, spam, fake accounts, and suicide prevention. AI helps identify fake accounts created for malicious purposes and shuts them down instantly.

Hate speech is tricky stuff. Requiring the combined efforts of AI and the company's community standards team, it is a tough nut to crack. It's always difficult to judge whether hate speech is actually present or whether there is a nuance to be considered. That's why the current approach involves AI automatically flagging potential hate speech, followed by manual review. In other areas, Facebook's AI system relies on computer vision and a confidence score to determine whether or not to remove content.

Summary

In a nutshell, AI is here to stay and is surely going to change the way Facebook serves both users and advertisers. Although it has always remained tight-lipped about future plans, Facebook is constantly utilizing technology to offer new features and services each year. With so many AI-based initiatives on board, Facebook is able to handle new challenges and explore new paths. After all, innovation has no end.

Want to learn more about AI and how to help businesses integrate it into their service offerings? Then join the Vietnam AI Grand Challenge, a series of hackathons focused on Artificial Intelligence with the goal of building the Ultimate AI Virtual Assistant. The series is organized by Kambria with the support of the Vietnam Ministry of Science & Technology and the Ministry of Planning & Investment.

Participants will be guided and trained in AI technology through a series of educational workshops and will have the opportunity to work with leading AI experts in Vietnam and Silicon Valley. Winners will also have a chance to develop their projects in a one-month incubation program before competing in the Grand Finale in August.

Cheers

Ways Facebook Uses Artificial Intelligence

 

Model


Requirements:

  • Python3
  • TensorFlow
  • pip install -r requirements.txt

Usage

$ python train.py

Hyperparameters

$ python train.py -h
usage: train.py [-h] [--embedding_size EMBEDDING_SIZE]
                [--num_layers NUM_LAYERS] [--num_hidden NUM_HIDDEN]
                [--keep_prob KEEP_PROB] [--learning_rate LEARNING_RATE]
                [--batch_size BATCH_SIZE] [--num_epochs NUM_EPOCHS]
                [--max_document_len MAX_DOCUMENT_LEN]

optional arguments:
  -h, --help            show this help message and exit
  --embedding_size EMBEDDING_SIZE
                        embedding size.
  --num_layers NUM_LAYERS
                        RNN network depth.
  --num_hidden NUM_HIDDEN
                        RNN network size.
  --keep_prob KEEP_PROB
                        dropout keep prob.
  --learning_rate LEARNING_RATE
                        learning rate.
  --batch_size BATCH_SIZE
                        batch size.
  --num_epochs NUM_EPOCHS
                        number of epochs.
  --max_document_len MAX_DOCUMENT_LEN
                        max document length.
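The repository targets the TensorFlow 1.x API. As a rough sketch of the shared-encoder idea behind multi-task learning — one embedding and RNN shared by a language-model head and a classification head — here is the same structure written with Keras. The layer sizes are illustrative, not the repository's defaults.

# Shared-encoder multi-task sketch in Keras (illustrative sizes only,
# not the repository's actual architecture).
import tensorflow as tf
from tensorflow.keras import layers, Model

vocab_size, embedding_size, num_hidden, num_classes = 10000, 128, 256, 4

inputs = layers.Input(shape=(None,), dtype='int32')
x = layers.Embedding(vocab_size, embedding_size)(inputs)  # shared embedding
x = layers.LSTM(num_hidden, return_sequences=True)(x)     # shared RNN encoder

# Task 1: language modeling -- predict the next token at every timestep.
lm_head = layers.Dense(vocab_size, activation='softmax', name='lm')(x)

# Task 2: text classification -- pool the sequence, then classify.
pooled = layers.GlobalMaxPooling1D()(x)
clf_head = layers.Dense(num_classes, activation='softmax', name='clf')(pooled)

model = Model(inputs, [lm_head, clf_head])
model.compile(optimizer='adam',
              loss={'lm': 'sparse_categorical_crossentropy',
                    'clf': 'sparse_categorical_crossentropy'})

Training on both tasks at once is then a single model.fit call with one target array per head.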

Experimental Results

Language Model Training Loss



Text Classification Training Loss

Thanks & Cheers

Multi-task Learning with TensorFlow

 


This is the second progress video about the port of openFrameworks to the Jetson.

The demonstration sketch shows that most of the OpenGL issues have been sorted out. Also, this is the first demo that includes a GLSL shader that is rendering the background. So some good progress is being made.

Two different openFrameworks add-ons are being demonstrated:

ofxTimeline and ofxUI.

ofxTimeline is being used to control the virtual camera movement. The timeline runs a 1-minute loop that controls the pan, zoom, and orbit of the camera. ofxUI provides the GUI element at the bottom left-hand corner, which displays the camera position dynamically.

The depth information from the Kinect is being rendered in what is referred to as a point cloud. Basically, a 3D point mesh is constructed for each frame being displayed, and the color of each point is calculated from the color camera (RGB), then displayed. There is no CUDA acceleration of the process at this point; it's all done on one of the ARM cores.
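The port itself is C++/openFrameworks, but the per-frame math is simple enough to sketch. Given a depth image and the camera intrinsics, each pixel back-projects to a 3D point; the intrinsics below are rough Kinect-like placeholders, not values from the demo.

# Back-project a depth image into a 3D point cloud (numpy sketch).
# fx, fy, cx, cy are rough Kinect-like placeholder intrinsics.
import numpy as np

def depth_to_points(depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    # depth_m: (H, W) array of depths in meters; returns (H*W, 3) points.
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

Each point's color then comes from looking up the registered RGB image at the same pixel.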

While there are still a few bugs hidden in the openFrameworks port, for the most part, everything is running smoothly.


Thanks & Cheers


Jetson TK1 Kinect Point Cloud in openFrameworks

 The Intel RealSense T265 Tracking Camera solves a fundamental problem in interfacing with the real world by helpfully answering “Where am I?” Looky here:



Background

One of the most important tasks in interfacing with the real world from a computer is calculating your position in relation to a map of the surrounding environment. When you do this dynamically, it is known as Simultaneous Localization And Mapping, or SLAM.

If you’ve been around the mobile robotics world at all (rovers, drones, cars), you probably have heard of this term. There are other applications too, such as Augmented Reality (AR) where a computing system must place the user precisely in the surrounding environment. Suffice it to say, it’s a foundational problem.

SLAM is a computational problem. How does a device construct or update a map of an unknown environment while simultaneously keeping track of its own location within that environment? People do this naturally in small places such as a house. At a larger scale, people have been clever enough to use visual navigational aids, such as the stars, to help build their maps.

This V-SLAM solution does something very similar. Two fisheye cameras combine with the information from an Inertial Measurement Unit (IMU) to navigate by visual features, tracking its way around even unknown environments with accuracy.

Let’s just say that this is a non-trivial problem. If you have tried to implement this yourself, you know that it can be expensive and time consuming. The Intel RealSense T265 Tracking Camera provides precise and robust tracking that has been extensively tested in a variety of conditions and environments.

The T265 is a self-contained tracking system that plugs into a USB port. Install the librealsense SDK, and you can start streaming pose data right away.
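For example, with the SDK's Python bindings (pyrealsense2), streaming pose data looks roughly like this sketch:

# Minimal T265 pose-streaming sketch using the librealsense
# Python bindings (pyrealsense2). Prints the reported translation.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.pose)
pipeline.start(config)
try:
    while True:
        frames = pipeline.wait_for_frames()
        pose = frames.get_pose_frame()
        if pose:
            t = pose.get_pose_data().translation
            print('x=%.3f y=%.3f z=%.3f' % (t.x, t.y, t.z))
finally:
    pipeline.stop()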

Tech Stuffs

Here are some tech specs:

Cameras

  • OV9282
  • Global Shutter, Fisheye Field of View = 163 degrees
  • Fixed Focus, Infrared Cut Filter
  • 848 x 800 resolution
  • 30 frames per second

Inertial Measurement Unit (IMU)

  • 6 Degrees of Freedom (6 DoF)
  • Accelerometer 
  • Gyroscope

Visual Processing Unit (VPU)

  • Movidius MA215x ASIC (Application Specific Integrated Circuit)

The power requirement is 300 mA at 5V (!!!). The package is 108 mm wide x 24.5 mm high x 12.5 mm deep. The camera weighs 60 grams.

Installation

To interface with the camera, Intel provides the open source library librealsense. On the JetsonHacksNano account on Github, there is a repository named installLibrealsense. The repository contains convenience scripts to install librealsense.

Note: Starting with L4T 32.2.1 / JetPack 4.2.2, a swap file is now part of the default install. You do not need to create a swap file if you are using this release or later. Skip the following step if using 32.2.1 or above.

In order to use the install script, you will either need to create a swapfile to ease an out-of-memory issue, or modify the install script to run fewer jobs during the make process. In the video, we chose the swapfile route. To install the swapfile:

$ git clone https://github.com/jetsonhacksnano/installSwapfile
$ cd installSwapfile
$ ./installSwapfile.sh
$ cd ..

You’re now ready to install librealsense.

$ git clone https://github.com/jetsonhacksnano/installLibrealsense
$ cd installLibrealsense
$ ./installLibrealsense.sh

While the installLibrealsense.sh script has the option to compile librealsense with CUDA support, we do not select that option. If you are using the T265 alone, there is no advantage to using CUDA, as the librealsense CUDA routines only convert images from the RealSense depth cameras (D415, D435, and so on).

The location of librealsense SDK products:

  • The library is installed in /usr/local/lib
  • The header files are in /usr/local/include
  • The demos and tools are located in /usr/local/bin

Go to the demos and tools directory, and check out the realsense-viewer application and all of the different demonstrations!

Conclusion

The Intel RealSense T265 is a powerful tool for use in robotics and augmented/virtual reality. Well worth checking out!

Notes

  • Tested on Jetson Nano L4T 32.1.0
  • If you have a mobile robot, you can send wheel odometry to the RealSense T265 through the librealsense SDK for better accuracy. The details are still being worked out.

Thanks, Cheers.

Jetson Nano – RealSense Tracking Camera

Many people use Intel RealSense cameras with robots. Here we install the realsense-ros wrapper on the NVIDIA Jetson Nano Developer Kit. Looky here:




Background

There are several members in the Intel RealSense camera family. This includes the Depth Cameras (D415, D435, D435i) and Tracking Camera (T265). There are also more recent introductions which are just becoming available.

The cameras all share the same Intel® RealSense™ SDK which is known as librealsense2. The SDK is open source and available on Github. We have articles for installing librealsense (D400x article and T265 article) here on the JetsonHacks site.

The size and weight of the cameras make them very good candidates for robotic applications. Computing hardware onboard the cameras provides depth and tracking information directly, which makes them a very attractive addition to a Jetson Nano. Plus, the cameras have low power consumption. Because ROS is the most popular middleware for robotics, here's how you install realsense-ros on the Jetson Nano.

Install RealSense Wrapper for ROS

There are two prerequisites for installing realsense-ros on the Jetson Nano. The first is to install librealsense, as linked above. The second prerequisite is a ROS installation. Check out Install ROS on Jetson Nano for a how-to on installing ROS Melodic on the Nano.

With the two prerequisites out of the way, it’s time to install realsense-ros. There are convenience scripts to install the RealSense ROS Wrapper on the Github JetsonHacksNano account.

$ git clone https://github.com/JetsonHacksNano/installRealSenseROS
$ cd installRealSenseROS
$ ./installRealSenseROS <catkin workspace name>

Where catkin workspace name is the path of the catkin workspace in which to place the RealSense ROS package. If no catkin workspace name is specified, the script defaults to ~/catkin_ws.

Note: Versions are in the releases section. The master branch of the repository will usually match the most recent release of L4T, but you may have to look through the releases for a suitable version. To check out one of the releases, switch to the installRealSenseROS directory and then:

$ git checkout <version number>

e.g.

$ git checkout vL4T32.2.1

The ROS launch files for the camera(s) are in the src directory of the Catkin workspace, under realsense-ros/realsense2_camera/launch. There are a variety of launch files to choose from. For example:

$ roslaunch realsense2_camera rs_camera.launch

You will need to make sure that your Catkin workspace is correctly sourced, and roscore is running.
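As a quick sanity check once the camera node is up, a minimal rospy subscriber can confirm that images are arriving. The topic name below is the usual realsense-ros default; verify yours with rostopic list.

# Minimal subscriber sketch to confirm the camera is publishing.
# The topic name is the usual realsense-ros default; check `rostopic list`.
import rospy
from sensor_msgs.msg import Image

def on_image(msg):
    rospy.loginfo('got %dx%d image, encoding=%s', msg.width, msg.height, msg.encoding)

rospy.init_node('realsense_check')
rospy.Subscriber('/camera/color/image_raw', Image, on_image)
rospy.spin()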

Notes

There are dependencies between the versions of librealsense and realsense-ros. The install scripts are also dependent on the version of L4T. Check the releases on the Github accounts to match.

In the video:

  • Jetson Nano
  • L4T 32.2.1 / JetPack 4.2.2
  • librealsense 2.25.0
  • realsense-ros 2.28.0

realsense-ros does not “officially” support ROS Melodic. However, we haven’t encountered any issues as of the time of this writing.

Thanks & Cheers

RealSense ROS Wrapper – Jetson Nano