
Clear Memory in Python

  1. Clear Memory in Python Using the gc.collect() Method
  2. Clear Memory in Python Using the del Statement

This tutorial looks into methods to free or clear memory in Python during program execution. When a program has to deal with large files, process a large amount of data, or keep a large amount of data in memory, it can easily run out of memory.

To prevent the program from running out of memory, we have to free or clear memory by clearing variables or data that are no longer needed. We can clear memory in Python using the following methods.

Clear Memory in Python Using the gc.collect() Method

The gc.collect(generation=2) method is used to clear or release unreferenced memory in Python. Unreferenced memory is memory that is inaccessible and cannot be used. The optional argument generation is an integer whose value ranges from 0 to 2; it specifies the generation of objects to collect using the gc.collect() method.

In Python, short-lived objects are stored in generation 0, and objects with a longer lifetime are stored in generation 1 or 2. The lists maintained by the garbage collector are cleared whenever gc.collect() is called with the default generation value of 2.

The gc.collect() method can help decrease memory usage and clear the unreferenced memory during the program execution. It can prevent the program from running out of memory and crashing by clearing the memory’s inaccessible data.
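
A minimal sketch of how this might look in practice (the large_list variable here is just an illustration):

import gc

large_list = [i for i in range(10_000_000)]  # large, short-lived data
large_list = None                            # drop the only reference

# A full collection; generation 2 also collects generations 0 and 1.
print(gc.collect(generation=2))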

Clear Memory in Python Using the del Statement

Along with the gc.collect() method, the del statement can be quite useful for clearing memory during a Python program's execution. The del statement is used to delete a variable in Python. We can delete variables, such as a large list or array, that we are sure are no longer required by the program.

The example code below demonstrates how to use the del statement to delete a variable.

import numpy as np

a = np.array([1, 2, 3])
del a

Suppose we try to use or access the variable after deleting it. In that case, the program raises a NameError exception, as the variable we are trying to access no longer exists in the namespace.

Example code:

import numpy as np

a = np.array([1, 2, 3])
del a
print(a)

Output:

NameError: name 'a' is not defined

The del statement removes the variable from the namespace, but it does not necessarily release the memory. Therefore, after deleting the variable with the del statement, we can use the gc.collect() method to clear it from memory.

The example code below demonstrates how to use the del statement together with the gc.collect() method to clear memory in Python.

import numpy as np
import gc

a = np.array([1, 2, 3])
del a          # remove the reference to the array
gc.collect()   # ask the garbage collector to reclaim the memory
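
Note that gc.collect() returns the number of unreachable objects it found, which is handy for checking whether the collection actually reclaimed anything.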

Diagnosing and Fixing Memory Leaks in Python

 

Model


Requirements:

  • Python3
  • TensorFlow
  • pip install -r requirements.txt

Usage

$ python train.py

Hyperparameters

$ python train.py -h
usage: train.py [-h] [--embedding_size EMBEDDING_SIZE]
                [--num_layers NUM_LAYERS] [--num_hidden NUM_HIDDEN]
                [--keep_prob KEEP_PROB] [--learning_rate LEARNING_RATE]
                [--batch_size BATCH_SIZE] [--num_epochs NUM_EPOCHS]
                [--max_document_len MAX_DOCUMENT_LEN]

optional arguments:
  -h, --help            show this help message and exit
  --embedding_size EMBEDDING_SIZE
                        embedding size.
  --num_layers NUM_LAYERS
                        RNN network depth.
  --num_hidden NUM_HIDDEN
                        RNN network size.
  --keep_prob KEEP_PROB
                        dropout keep prob.
  --learning_rate LEARNING_RATE
                        learning rate.
  --batch_size BATCH_SIZE
                        batch size.
  --num_epochs NUM_EPOCHS
                        number of epochs.
  --max_document_len MAX_DOCUMENT_LEN
                        max document length.
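
The post does not include the parser itself, but a minimal argparse sketch that would produce this help text (the default values below are assumptions, not the repository's) might look like:

import argparse

# Hypothetical reconstruction of train.py's argument parser.
parser = argparse.ArgumentParser()
parser.add_argument("--embedding_size", type=int, default=256, help="embedding size.")
parser.add_argument("--num_layers", type=int, default=1, help="RNN network depth.")
parser.add_argument("--num_hidden", type=int, default=256, help="RNN network size.")
parser.add_argument("--keep_prob", type=float, default=0.8, help="dropout keep prob.")
parser.add_argument("--learning_rate", type=float, default=0.001, help="learning rate.")
parser.add_argument("--batch_size", type=int, default=64, help="batch size.")
parser.add_argument("--num_epochs", type=int, default=10, help="number of epochs.")
parser.add_argument("--max_document_len", type=int, default=100, help="max document length.")
args = parser.parse_args()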

Experimental Results

Language Model Training Loss



Text Classification Training Loss

Thanks & Cheers
Stack Exchanges

Multi-task Learning with TensorFlow

 The Intel RealSense T265 Tracking Camera solves a fundamental problem in interfacing with the real world by helpfully answering “Where am I?” Looky here:



Background

One of the most important tasks in interfacing with the real world from a computer is to calculate your position in relation to a map of the surrounding environment. When you do this dynamically, it is known as Simultaneous Localization and Mapping, or SLAM.

If you’ve been around the mobile robotics world at all (rovers, drones, cars), you probably have heard of this term. There are other applications too, such as Augmented Reality (AR) where a computing system must place the user precisely in the surrounding environment. Suffice it to say, it’s a foundational problem.

SLAM is a computational problem. How does a device construct or update a map of an unknown environment while simultaneously keeping track of its own location within that environment? People do this naturally in small places such as a house. At a larger scale, people have been clever enough to use visual navigational aids, such as the stars, to help build their maps.

This V-SLAM solution does something very similar. Two fisheye cameras combine with information from an Inertial Measurement Unit (IMU) to navigate by visual features and accurately track the camera's way around even unknown environments.

Let’s just say that this is a non-trivial problem. If you have tried to implement this yourself, you know that it can be expensive and time consuming. The Intel RealSense T265 Tracking Camera provides precise and robust tracking that has been extensively tested in a variety of conditions and environments.

The T265 is a self-contained tracking system that plugs into a USB port. Install the librealsense SDK, and you can start streaming pose data right away.
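
For example, a minimal pose-streaming sketch using the SDK's pyrealsense2 Python bindings (assuming the bindings are installed and a T265 is attached) looks something like this:

import pyrealsense2 as rs

# Configure the pipeline to stream pose data from the T265.
pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.pose)
pipe.start(cfg)

try:
    for _ in range(100):
        frames = pipe.wait_for_frames()
        pose = frames.get_pose_frame()
        if pose:
            data = pose.get_pose_data()
            print("Position:", data.translation)
finally:
    pipe.stop()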

Tech Stuffs

Here’s some tech specs:

Cameras

  • OV9282
  • Global Shutter, Fisheye Field of View = 163 degrees
  • Fixed Focus, Infrared Cut Filter
  • 848 x 800 resolution
  • 30 frames per second

Inertial Measurement Unit (IMU)

  • 6 Degrees of Freedom (6 DoF)
  • Accelerometer 
  • Gyroscope

Visual Processing Unit (VPU)

  • Movidius MA215x ASIC (Application Specific Integrated Circuit)

The Power Requirement is 300 mA at 5V (!!!). The package is 108mm Wide x 24.5mm High x 12.50mm Deep. The camera weighs 60 grams.

Installation

To interface with the camera, Intel provides the open source library librealsense. On the JetsonHacksNano account on GitHub, there is a repository named installLibrealsense, which contains convenience scripts to install librealsense.

Note: Starting with L4T 32.2.1/JetPack 4.2.2, a swap file is part of the default install. You do not need to create a swap file if you are using this release or later; skip the following step if so.

In order to use the install script, you will either need to create a swap file to ease an out-of-memory issue, or modify the install script to run fewer jobs during the make process. In the video, we chose the swap file route. To install the swap file:

$ git clone https://github.com/jetsonhacksnano/installSwapfile
$ cd installSwapfile
$ ./installSwapfile.sh
$ cd ..

You’re now ready to install librealsense.

$ git clone https://github.com/jetsonhacksnano/installLibrealsense
$ cd installLibrealsense
$ ./installLibrealsense.sh

While the installLibrealsense.sh script has the option to compile librealsense with CUDA support, we do not select that option. If you are using the T265 alone, there is no advantage to using CUDA, as the librealsense CUDA routines only convert images from the RealSense depth cameras (D415, D435, and so on).

The location of librealsense SDK products:

  • The library is installed in /usr/local/lib
  • The header files are in /usr/local/include
  • The demos and tools are located in /usr/local/bin

Go to the demos and tools directory and check out the realsense-viewer application and all of the different demonstrations!

Conclusion

The Intel RealSense T265 is a powerful tool for use in robotics and augmented/virtual reality. Well worth checking out!

Notes

  • Tested on Jetson Nano L4T 32.1.0
  • If you have a mobile robot, you can send wheel odometry to the RealSense T265 through the librealsense SDK for better accuracy. The details are still being worked out.

Thanks, Cheers.

Jetson Nano – RealSense Tracking Camera

Many people use Intel RealSense cameras with robots. Here we install the realsense-ros wrapper on the NVIDIA Jetson Nano developer kit. Looky here:




Background

There are several members in the Intel RealSense camera family. This includes the Depth Cameras (D415, D435, D435i) and Tracking Camera (T265). There are also more recent introductions which are just becoming available.

The cameras all share the same Intel® RealSense™ SDK, which is known as librealsense2. The SDK is open source and available on GitHub. We have articles for installing librealsense (D400x article and T265 article) here on the JetsonHacks site.

The size and weight of the cameras make them very good candidates for robotic applications. Computing hardware onboard the cameras provides depth and tracking information directly, which makes them a very attractive addition to a Jetson Nano. Plus, the cameras have low power consumption. Because ROS is the most popular middleware application for robotics, here's how to install realsense-ros on the Jetson Nano.

Install RealSense Wrapper for ROS

There are two prerequisites for installing realsense-ros on the Jetson Nano. The first is to install librealsense as linked above. The second is a ROS installation. Check out Install ROS on Jetson Nano for a how-to on installing ROS Melodic on the Nano.

With the two prerequisites out of the way, it’s time to install realsense-ros. There are convenience scripts to install the RealSense ROS Wrapper on the Github JetsonHacksNano account.

$ git clone https://github.com/JetsonHacksNano/installRealSenseROS
$ cd installRealSenseROS
$ ./installRealSenseROS <catkin workspace name>

Where catkin workspace name is the path to the catkin workspace in which to place the RealSense ROS package. If no catkin workspace name is specified, the script defaults to ~/catkin_ws.

Note: Versions are in the releases section. The master branch of the repository will usually match the most recent release of L4T, but you may have to look through the releases for a suitable version. To check out one of the releases, switch to the installRealSenseROS directory and then:

$ git checkout <version number>

e.g.

$ git checkout vL4T32.2.1

The ROS launch files for the camera(s) are in the src directory of the catkin workspace, under realsense-ros/realsense2_camera/launch. There are a variety of launch files to choose from. For example:

$ roslaunch realsense2_camera rs_camera.launch

You will need to make sure that your Catkin workspace is correctly sourced, and roscore is running.
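
As a quick smoke test from Python, you can subscribe to the tracking camera's odometry; this sketch assumes the realsense-ros default topic name /camera/odom/sample for the T265:

import rospy
from nav_msgs.msg import Odometry

def callback(msg):
    # Print the camera's estimated position from the odometry message.
    p = msg.pose.pose.position
    print("x=%.3f y=%.3f z=%.3f" % (p.x, p.y, p.z))

rospy.init_node("t265_listener")
rospy.Subscriber("/camera/odom/sample", Odometry, callback)
rospy.spin()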

Notes

There are dependencies between the versions of librealsense and realsense-ros. The install scripts are also dependent on the version of L4T. Check the releases on the Github accounts to match.

In the video:

  • Jetson Nano
  • L4T 32.2.1 / JetPack 4.2.2
  • librealsense 2.25.0
  • realsense-ros 2.28.0

realsense-ros does not “officially” support ROS Melodic. However, we haven’t encountered any issues as of the time of this writing.

Thanks Cheers

RealSense ROS Wrapper – Jetson Nano

 The course MIT 6.S094: Deep Learning for Self-Driving Cars is currently in session. Course instructor Dr. Lex Fridman states that “Our goal is to release 1 lecture every other day until all 20 lectures and guest talks are out. 

It’s important to me to make this course free and open to everyone.”

Deep Learning for Self-Driving Cars

Here’s the course blurb:

This class is an introduction to the practice of deep learning through the applied theme of building a self-driving car. It is open to beginners and is designed for those who are new to machine learning, but it can also benefit advanced researchers in the field looking for a practical overview of deep learning methods and their application.

The best part is that the slides and lecture videos are online, usually available a few days after the lecture is given. Here’s the playlist:


Artificial General Intelligence

Another class being taught by Dr. Fridman right now is MIT 6.S099: Artificial General Intelligence. Here’s the class blurb:

This class takes an engineering approach to exploring possible research paths toward building human-level intelligence. The lectures will introduce our current understanding of computational intelligence and ways in which strong AI could possibly be achieved, with insights from deep learning, reinforcement learning, computational neuroscience, robotics, cognitive modeling, psychology, and more. Additional topics will include AI safety and ethics. Projects will seek to build intuition about the limitations of state-of-the-art machine learning approaches and how those limitations may be overcome. The course will include several guest talks. Listeners are welcome.



This is well worth the time investment if either of these areas interests you.

Thanks Cheers

Deep Learning for Self-Driving Cars

Recognizing an environment in one glance is one of the human brain's most accomplished deeds. While the tremendous recent progress in object recognition tasks originates from the availability of large datasets such as COCO and the rise of Convolutional Neural Networks (CNNs) that learn high-level features, scene recognition performance has not achieved the same level of success.

In this blog post, we will see how classification models perform on classifying images of a scene. For this task, we have taken the Places365-Standard dataset to train the model. This dataset has 1,803,460 training images and 365 classes, with the number of images per class varying from 3,068 to 5,000 and an image size of 256×256.

Installing and Downloading the data

!git clone https://github.com/Tessellate-Imaging/monk_v1.git
!cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt

After installing the dependencies, I downloaded the Places365-Standard dataset, which is available to download from here.

Create an Experiment

I have created an experiment, and for this task I used the MXNet Gluon backend.

import os
import sys
sys.path.append("monk_v1/monk/");
from gluon_prototype import prototype
gtf = prototype(verbose=1);

gtf.Prototype("Places_365", "Experiment");

 

Model Selection and Training

gtf.Default(dataset_path="train/",
            path_to_csv="labels.csv",
            model_name="vgg16",
            freeze_base_network=False,
            num_epochs=20);
gtf.Train();
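
Note that with freeze_base_network=False, the pretrained VGG16 weights are fine-tuned on Places365 along with the new classifier head, rather than being kept fixed as a feature extractor.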

Prediction

gtf = prototype(verbose=1);
gtf.Prototype("Places_365", "Experiment", eval_infer=True);

from IPython.display import Image

img_name = "test_256/Places365_test_00208427.jpg"
predictions = gtf.Infer(img_name=img_name);
Image(filename=img_name)

img_name = "test_256/Places365_test_00151496.jpg"
predictions = gtf.Infer(img_name=img_name);
Image(filename=img_name)
Prediction on test images (predicted labels: field_road and forest_road)

A wrongly labeled image in the baseball_field class:

import matplotlib.image as mpimg
import matplotlib.pyplot as plt

img = mpimg.imread("images/train/baseball_field2469.jpg")
imgplot = plt.imshow(img)

Natural Scene Recognition Using Deep Learning