Monday, October 2, 2017

ISRO Answer Key 2017

Please find below the answer key for ISRO 2017.

This answer key is for Set A.

1 (b)
2 (d)
3 (d)
4 (b)
5 (c)
6 (a)
7 (c)
8 (d)
9 (b)
10 (c)
11 (a)
12 (c)
13 (c)
14 (c)
15 (a)
16 (c)
17 (d)
18 (d)
19 (b)
20 (d)
21 (b)
22 (d)
23 (b)
24 (c)
25 (b)
26 (c)
27 (d)
28 (d)
29 (c)
30 (d)
31 (c)
32 (d)
33 (a)
34 (a)
35 (d)
36 (a)
37 (d)
38 (c)
39 (c)
40 (c)
41 (a)
42 (d)
43 (b)
44 (d)
45 (b)
46 (d)
47 (d)
48 (d)
49 (d)
50 (c)
51 (a)
52 (a)
53 (c)
54 (d)
55 (c)
56 (d)
57 (c)
58 (c)
59 (b)
60 (c)
61 (d)
62 (c)
63 (a)
64 (a)
65 (b)
66 (b)
67 (b)
68 (c)
69 (a)
70 (d)
71 (a)
72 (b)
73 (c)
74 (d)
75 (c)
76 (c)
77 (b)
78 (b)
79 (a)
80 (d)

ISRO Answer Key 2016


Please find below the answer key for ISRO 2016, Question Set A:


Question No.    Answer Key
1 B
2 B
3 B
4 A
5 A
6 C
7 C
8 C
9 B
10 A
11 C
12 C
13 D
14 A
15 C
16 B
17 B
18 D
19 C
20 B
21 C
22 A
23 B
24 B
25 A
26 C
27 D
28 C
29 C
30 B
31 A
32 B
33 B
34 A
35 A
36 C
37 C
38 A
39 A
40 C
41 C
42 A
43 C
44 C
45 C
46 D
47 A
48 B
49 C
50 B
51 C
52 A
53 A
54 B
55 D
56 A
57 A
58 A
59 C
60 B
61 A
62 A
63 B
64 D
65 A
66 A
67 C
68 A
69 B
70 A
71 A
72 B
73 A
74 C
75 A
76 A
77 B
78 B
79 D
80 D

Wednesday, September 13, 2017

Machine Learning - Linear regression using python

Hello friends,

This is my first post on machine learning.

Let's talk about the problem first: the goal is to predict the price of an automobile.
The dataset is available at the link below.
https://archive.ics.uci.edu/ml/datasets/Automobile

The dataset description is available at the URL below.
https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.names

Building any machine learning model involves a lot of steps; below are the main steps in the model-building process.


Data Collection:
As part of data collection, we will fetch the data from the URL below.
https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data
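
A minimal sketch of this step might look like the following, assuming pandas is installed; the column names are taken from the imports-85.names description linked above.

import pandas as pd

url = "https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data"

# The raw file has no header row, so we supply the column names ourselves.
columns = ["symboling", "normalized-losses", "make", "fuel-type", "aspiration",
           "num-of-doors", "body-style", "drive-wheels", "engine-location",
           "wheel-base", "length", "width", "height", "curb-weight",
           "engine-type", "num-of-cylinders", "engine-size", "fuel-system",
           "bore", "stroke", "compression-ratio", "horsepower", "peak-rpm",
           "city-mpg", "highway-mpg", "price"]

df = pd.read_csv(url, header=None, names=columns)
print(df.shape)    # should be (205, 26) for the full dataset
print(df.head())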

Data Preparation and Cleaning:
This is one of the most important steps in building any ML model.
Some common sub-steps are listed below, and there can be many more; a combined code sketch of the cleaning and encoding steps follows the list.
  1. Adding a header to the data: Since the dataset does not come with a header row, we add the corresponding name to each and every column. If a header is already present, well and good, and we can skip this step.
  2. Removing spaces: Some of the categorical values contain whitespace, which can sometimes cause problems when comparing data down the line, so we eliminate those whitespaces.
  3. Imputing values: An "imputed value" is a value assigned to an item for which an actual value is not available. In our dataset you will find a lot of "?" entries, which are missing values. This step has two sub-steps, as follows.
  •    Replace "?" with NaN: For every "?" in the dataset, we substitute Python's NaN. On inspection, 41 rows of normalized-losses, 2 rows of num-of-doors, 4 rows of bore, 4 rows of stroke, 2 rows of horsepower, 2 rows of peak-rpm and finally 4 rows of price are missing.

  •    Replace NaN with values: We then assign suitable values to these rows for the corresponding columns. There are a lot of ways to do this; a common one, which I prefer, is to replace missing numeric data with the column mean and missing categorical data with the mode. Note: sometimes people opt to remove these rows and then build the model; in the end, the accuracy of the model will decide.
           

  4. Encoding: Encoding is the technique of replacing categorical data with quantitative (numeric) values. This is needed because most algorithms cannot handle categorical data and only work with numeric data. Again, this step has various sub-steps.

  •    Find and replace: Sometimes data in a column is categorical but really means a number. In our case, "num-of-doors" and "num-of-cylinders" are stored as words, so we can simply replace them with the equivalent numbers.

  •    Label encoding: Label encoding simply converts each value in a column to a number. For example, "make" has 22 manufacturers, so we assign the numbers 1-22 to them, e.g. alfa-romero --> 1, audi --> 2.

  •    One-hot encoding: Label encoding has the advantage of being straightforward, but the disadvantage is that the numeric values can be "misinterpreted" by the algorithms (an ordering is implied where none exists). A common alternative approach is called one-hot encoding. The basic strategy is to convert each category value into a new column and assign a 1 or 0 (True/False) value to that column. This has the benefit of not weighting a value improperly, but it does have the downside of adding more columns to the data set. Pandas supports this feature with get_dummies; the function is named this way because it creates dummy/indicator variables (1 or 0).
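
Here is a rough sketch of the cleaning and encoding steps above, continuing from the DataFrame df loaded in the data-collection sketch; the exact imputation and encoding choices shown are just one of many reasonable options.

import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder

# 2. Strip stray whitespace from the categorical (object) columns.
for col in df.select_dtypes(include="object").columns:
    df[col] = df[col].str.strip()

# 3a. Replace the "?" markers with NaN so pandas recognizes them as missing.
df = df.replace("?", np.nan)

# 3b. Impute: numeric columns with the mean, categorical columns with the mode.
for col in ["normalized-losses", "bore", "stroke", "horsepower", "peak-rpm", "price"]:
    df[col] = pd.to_numeric(df[col])
    df[col] = df[col].fillna(df[col].mean())
df["num-of-doors"] = df["num-of-doors"].fillna(df["num-of-doors"].mode()[0])

# 4a. Find and replace: word values that really mean numbers.
df = df.replace({"num-of-doors": {"two": 2, "four": 4},
                 "num-of-cylinders": {"two": 2, "three": 3, "four": 4, "five": 5,
                                      "six": 6, "eight": 8, "twelve": 12}})

# 4b. Label encoding for "make" (scikit-learn's LabelEncoder is one option).
df["make"] = LabelEncoder().fit_transform(df["make"])

# 4c. One-hot encoding for the remaining categorical columns.
df = pd.get_dummies(df, columns=["fuel-type", "aspiration", "body-style",
                                 "drive-wheels", "engine-location",
                                 "engine-type", "fuel-system"])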



Data Visualization:
This step helps us understand the relationships between the various attributes by plotting them.

For example, after encoding we can see that fuel-type has only two values. Another observation is that very few automobiles have the engine located at the rear. From a histogram of wheel-base we can see that the majority of vehicles have a wheel base between 95 and 110. We can plot the other attributes in the same way and visualize them.
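
A small sketch of the kind of plots described above might look like this, using matplotlib and assuming the DataFrame from before the one-hot encoding step (so the original fuel-type and engine-location columns are still present).

import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 3, figsize=(15, 4))

# Count of vehicles per fuel type: only two bars (gas and diesel).
df["fuel-type"].value_counts().plot(kind="bar", ax=axes[0], title="fuel-type")

# Engine location: almost all vehicles are front-engined.
df["engine-location"].value_counts().plot(kind="bar", ax=axes[1], title="engine-location")

# Wheel-base distribution: most vehicles fall roughly between 95 and 110.
df["wheel-base"].plot(kind="hist", bins=20, ax=axes[2], title="wheel-base")

plt.tight_layout()
plt.show()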

Feature Engineering:
This is one of the key aspects of making your model accurate. Again, it involves various kinds of analysis, as follows (a short correlation sketch follows this list).

  •        Feature-target correlation: Here we analyze the relationship between price (the target) and the other attribute columns and look at the correlation between them. If we find a column that is barely correlated with the target, we can drop that column and build our model without it.
  •        Feature-feature correlation: Some features (columns) may be redundant. If we look at the correlation plot, such columns will be highly correlated with each other, and we can eliminate one of them so that our model does not over- or under-fit.
  •        Quadratic features (combining two features): In this technique we combine two features into a single new column and use that in our model.
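
A brief sketch of this correlation analysis, assuming the cleaned, fully numeric DataFrame df with a price column; the length-times-width column at the end is just a hypothetical example of a combined feature.

import matplotlib.pyplot as plt

corr = df.corr()

# Feature-target correlation: how strongly each column relates to price.
print(corr["price"].sort_values(ascending=False))

# Feature-feature correlation: a heat map makes redundant pairs easy to spot.
plt.matshow(corr)
plt.colorbar()
plt.show()

# Quadratic feature: combine two columns into one new feature.
df["length-x-width"] = df["length"] * df["width"]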

Model Building:
In this step we will build the model, but before we go ahead and build it, we split the
dataset into training and test data. This is because our model will learn from the training dataset
and run its predictions against the test data, for which we already know the expected output.
In this way we can measure the accuracy of the model.
In general the split ratio is 70-30, but in my case, as the dataset is very small, I have taken 80-20 (80%
training data and 20% test data).

Finally we will build the model and run the prediction.
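
A minimal sketch of the split and the linear regression fit, using scikit-learn (the full script is in the GitHub link below):

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

X = df.drop("price", axis=1)   # features
y = df["price"]                # target

# 80-20 split, as described above.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LinearRegression()
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("R^2 on the test data:", r2_score(y_test, predictions))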

The whole source code for the above exercise is uploaded at the link below.

https://github.com/Hariomsingh2007/Harrycodehub/blob/master/Linear_Regression.py

Monday, July 31, 2017

Text to Audio Converter

Hello everyone, I am writing this piece of code to convert text to audio.

Below are the modules that need to be installed:

1. gtts
2. pygame

Note: For Mac, please install version 1.9.2 in case you are facing any issue with the
latest package.

The code asks the user for text input, converts the text to an audio file (chatBot.wav),
saves it to the local directory and, once the file is saved, simply plays the audio file.
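
A minimal sketch of that flow, assuming the gtts and pygame modules are installed (note that gTTS actually writes MP3 data, regardless of the .wav file name used here):

from gtts import gTTS
import pygame

text = input("Enter the text to convert to audio: ")

# gTTS fetches the spoken audio from Google's text-to-speech service.
tts = gTTS(text=text, lang="en")
tts.save("chatBot.wav")

# Play the saved file with pygame's mixer and wait until it finishes.
pygame.mixer.init()
pygame.mixer.music.load("chatBot.wav")
pygame.mixer.music.play()
while pygame.mixer.music.get_busy():
    pygame.time.Clock().tick(10)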







Sunday, July 9, 2017

Audio Controlled Bot : Jarvis


Today I will be showing you one of my projects, which I have been working on recently.

I have named it Jarvis, a name I took from the movie Iron Man.

So basically it is audio-based search and PC control.


I will be sharing the source code later, once I have the final version of Jarvis completed.

From a code point of view, I have been using the speech_recognition, requests and selenium modules
of Python.
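
As a rough illustration of the audio-based search part (this is only a sketch built on the modules named above, not the actual Jarvis code):

import speech_recognition as sr
from selenium import webdriver

recognizer = sr.Recognizer()

# Listen on the default microphone and transcribe the phrase with Google's speech API.
with sr.Microphone() as source:
    print("Say something...")
    audio = recognizer.listen(source)
query = recognizer.recognize_google(audio)
print("You said:", query)

# Open a browser and search for the recognized phrase.
driver = webdriver.Chrome()
driver.get("https://www.google.com/search?q=" + query)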







Thursday, January 12, 2017

Connecting to an Oracle Database using Python (cx_Oracle)

Hello friends, today I will be telling you how to connect to an Oracle database using Python.

We will run some queries and also try to explore other options like inserts, running anonymous blocks, etc.

In order to connect to an Oracle database we need the cx_Oracle module (Download).

Make sure that you have an entry in your registry for Python, and that the version of cx_Oracle is compatible with your system.

Let's see a sample implementation.

I have created a table called friends in my Oracle database (localhost), and I will try to connect to it and retrieve this data using Python.
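
A minimal sketch of such a connection and query with cx_Oracle might look like this; the user name, password and service name are placeholders, not the actual ones.

import cx_Oracle

# Placeholder credentials and DSN; replace with your own.
connection = cx_Oracle.connect("scott", "tiger", "localhost/XE")
cursor = connection.cursor()

cursor.execute("SELECT * FROM friends")

# Each fetched row is a tuple; values[0] is the first column (here, the name).
for values in cursor:
    print(values[0])

cursor.close()
connection.close()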












The result will be of type tuple, so in order to get a specific field we can use the tuple index. For example, values[0] will print the name of the friend.


Monday, January 2, 2017

Live Twitter Feed

Hello friends, today we will be working with the Twitter API.

I will be showing you how you can use the Twitter API to get a live feed of anything from Twitter.

In order to access the Twitter API we need the tweepy module, which is already available; you can pip install it using the command below:

pip install tweepy

Or you can download it from the Python Package Index.

But in order to use the API you need to have four keys, which you can get by registering yourself on the Twitter developer page:

consumer_key
consumer_secret
access_token
access_token_secret
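
A rough sketch of what the streaming code might look like with the tweepy 3.x API, assuming the four keys above are defined as variables:

import tweepy

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)

class FeedListener(tweepy.StreamListener):
    def on_status(self, status):
        # Print each matching tweet as it arrives.
        print(status.text)

    def on_error(self, status_code):
        # Returning False on a rate-limit error (420) disconnects the stream.
        return status_code != 420

stream = tweepy.Stream(auth=auth, listener=FeedListener())
stream.filter(track=["TimesNow"])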


In the above code I am continuously monitoring TimesNow. If you are interested in following something else, please change the tracked term accordingly.

Sample feed which I have generated.