
handson-ml2 | Jupyter notebooks that walk you through the fundamentals | Machine Learning library

by ageron | Jupyter Notebook | Version: Current | License: Apache-2.0

kandi X-RAY | handson-ml2 Summary

handson-ml2 is a Jupyter Notebook library typically used in Artificial Intelligence, Machine Learning, Deep Learning, TensorFlow, Keras, Jupyter, and Pandas applications. handson-ml2 has no bugs, no reported vulnerabilities, a Permissive License, and medium support. You can download it from GitHub.
This project aims at teaching you the fundamentals of Machine Learning in Python. It contains the example code and solutions to the exercises in the second edition of the O'Reilly book [Hands-on Machine Learning with Scikit-Learn, Keras and TensorFlow](https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/). Note: if you are looking for the first edition notebooks, check out [ageron/handson-ml](https://github.com/ageron/handson-ml).

Support

  • handson-ml2 has a medium-active ecosystem.
  • It has 16,775 stars and 7,998 forks. There are 546 watchers for this library.
  • It had no major release in the last 12 months.
  • There are 130 open issues and 237 have been closed. On average, issues are closed in 49 days. There are 2 open pull requests and 0 closed pull requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of handson-ml2 is current.

Quality

  • handson-ml2 has 0 bugs and 0 code smells.

Security

  • handson-ml2 has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • handson-ml2 code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • handson-ml2 is licensed under the Apache-2.0 License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • handson-ml2 releases are not available. You will need to build from source code and install.
  • Installation instructions, examples and code snippets are available.
  • It has 13 lines of code, 1 function, and 1 file.
  • It has low code complexity. Code complexity directly impacts maintainability of the code.

handson-ml2 Key Features

A series of Jupyter notebooks that walk you through the fundamentals of Machine Learning and Deep Learning in Python using Scikit-Learn, Keras and TensorFlow 2.

Want to install this project on your own machine?

$ git clone https://github.com/ageron/handson-ml2.git
$ cd handson-ml2

Custom environment using TFagents

self._action_spec = tf_agents.specs.BoundedArraySpec(shape=(), dtype=np.int32, name="action", minimum=0, maximum=3)
self._observation_spec = tf_agents.specs.BoundedArraySpec(shape=(4, 4), dtype=np.int32, name="observation", minimum=0, maximum=1)
env = MyEnvironment()
tf_env = tf_agents.environments.tf_py_environment.TFPyEnvironment(env)

What does urllib.request.urlretrieve do if not returned

tgz_path = os.path.join(housing_path, "housing.tgz")  # <-- the path where the archive is saved

# urlretrieve takes two parameters: the URL and the file path to save the content to
urllib.request.urlretrieve(housing_url, tgz_path)

# Reference: https://docs.python.org/3/library/urllib.request.html
In [1]: import urllib.request

In [2]: urllib.request.urlretrieve('http://python.org/', 'test.python')
Out[2]: ('test.python', <http.client.HTTPMessage at 0x108d22390>)
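For clarity, urlretrieve returns a (local_path, HTTPMessage) tuple whether or not you bind the result to a name; a minimal sketch below (the target file names are arbitrary examples) shows both capturing and ignoring that return value.

import urllib.request

# urlretrieve downloads the resource at the given URL to the given file name and
# always returns a (local_path, HTTPMessage) tuple.
local_path, headers = urllib.request.urlretrieve("http://python.org/", "test.python")
print(local_path)                    # 'test.python'
print(headers.get("Content-Type"))   # e.g. 'text/html; charset=utf-8'

# Calling it without using the return value still writes the file to disk;
# the tuple is simply discarded.
urllib.request.urlretrieve("http://python.org/", "test2.python")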

Why shuffling the data like this leads to a poor accuracy

X_train, y_train = shuffle(X_train, y_train)
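The line above (sklearn.utils.shuffle) keeps features and labels aligned. Accuracy typically collapses when X and y are shuffled independently, because each label no longer corresponds to its row. A small sketch below (synthetic data, hypothetical names) illustrates the difference.

import numpy as np
from sklearn.utils import shuffle

rng = np.random.RandomState(42)
X_train = rng.rand(1000, 3)
y_train = (X_train[:, 0] > 0.5).astype(int)  # label depends on the first feature

# Correct: shuffle X and y together, preserving the row/label correspondence.
X_ok, y_ok = shuffle(X_train, y_train, random_state=0)
assert np.array_equal(y_ok, (X_ok[:, 0] > 0.5).astype(int))

# Wrong: shuffling X and y with different permutations destroys the pairing,
# so any model trained on (X_bad, y_bad) learns essentially random labels.
X_bad = shuffle(X_train, random_state=1)
y_bad = shuffle(y_train, random_state=2)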

Python 3.8.3 "File-not-found" message

fetch_housing_data()
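The usual cause of this message is calling the helper from a directory where the datasets folder does not exist yet, or not creating it before writing. A minimal sketch of such a helper, following the chapter 2 pattern from the book (URL and paths assumed), is:

import os
import tarfile
import urllib.request

# Paths and URL assumed to follow the book's chapter 2 convention.
DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml2/master/"
HOUSING_PATH = os.path.join("datasets", "housing")
HOUSING_URL = DOWNLOAD_ROOT + "datasets/housing/housing.tgz"

def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):
    # Create the target directory first so urlretrieve has somewhere to write the
    # archive; a missing directory (or running from the wrong working directory)
    # is what usually produces the "file not found" message.
    os.makedirs(housing_path, exist_ok=True)
    tgz_path = os.path.join(housing_path, "housing.tgz")
    urllib.request.urlretrieve(housing_url, tgz_path)
    with tarfile.open(tgz_path) as housing_tgz:
        housing_tgz.extractall(path=housing_path)

fetch_housing_data()  # run from the project root so datasets/housing is created there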

Error in using DataFrameMapper() for PolynomialFeature() in sklearn

import pandas as pd
from sklearn.preprocessing import PolynomialFeatures
from sklearn_pandas import DataFrameMapper

# load data
df = pd.read_csv('https://raw.githubusercontent.com/ageron/handson-ml2/master/datasets/housing/housing.csv')

# create houseAge_income
df['houseAge_income'] = df.housing_median_age.mul(df.median_income)

# configure mapper with all columns passed as lists
mapper = DataFrameMapper([(['houseAge_income'], PolynomialFeatures(2)),
                          (['median_income'], PolynomialFeatures(2)),
                          (['latitude', 'housing_median_age', 'total_rooms', 'population', 'median_house_value', 'ocean_proximity'], None)])

# fit
poly_feature = mapper.fit_transform(df)

# display(pd.DataFrame(poly_feature).head())
  0       1           2  3       4       5      6   7     8     9          10        11
0  1  341.33  1.1651e+05  1  8.3252  69.309  37.88  41   880   322  4.526e+05  NEAR BAY
1  1  174.33       30391  1  8.3014  68.913  37.86  21  7099  2401  3.585e+05  NEAR BAY
2  1  377.38  1.4242e+05  1  7.2574   52.67  37.85  52  1467   496  3.521e+05  NEAR BAY
3  1  293.44       86108  1  5.6431  31.845  37.85  52  1274   558  3.413e+05  NEAR BAY
4  1     200       40001  1  3.8462  14.793  37.85  52  1627   565  3.422e+05  NEAR BAY
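If you would rather get a labelled DataFrame back instead of the bare array shown above, sklearn_pandas also accepts a df_out flag (a hedged sketch, continuing from the snippet above; the exact generated column names depend on your installed sklearn_pandas version):

# Hedged variant: pass df_out=True so fit_transform returns a pandas DataFrame
# with generated column names instead of a plain ndarray.
mapper_df = DataFrameMapper([(['houseAge_income'], PolynomialFeatures(2)),
                             (['median_income'], PolynomialFeatures(2)),
                             (['latitude', 'housing_median_age', 'total_rooms',
                               'population', 'median_house_value', 'ocean_proximity'], None)],
                            df_out=True)
poly_df = mapper_df.fit_transform(df)
print(poly_df.head())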

Powershell set a variable and print but nothing

$ML_PATH = "E:\Workspace\Handson-ml2"
Write-Output $ML_PATH
Get-Alias echo

CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Alias           echo -> Write-Output

Installing Tensorflow 2 gets a dll failed to load in pywrap_tensorflow.py

$ pip install --user pipenv
C:\Users\XXXXX\AppData\Local\Programs\Python\Python38\Scripts\pip.exe install https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow_cpu-2.2.0-cp38-cp38-win_amd64.whl
pip.exe -V
pip.exe install https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow_cpu-2.2.0-cp38-cp38-win_amd64.whl
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vx.y\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vx.y\include
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vx.y\lib
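After installing a matching wheel and adding the CUDA directories to PATH, a quick Python check (assuming a TensorFlow 2.x install) confirms that the DLLs load and whether a GPU is visible:

# Sanity check after installation: importing tensorflow is what triggers the
# pywrap_tensorflow DLL load, so a clean import means the DLL issue is resolved.
import tensorflow as tf

print(tf.__version__)                          # e.g. 2.2.0
print(tf.config.list_physical_devices("GPU"))  # [] is expected for the CPU-only wheel
print(tf.reduce_sum(tf.random.normal([100, 100])).numpy())  # run a small op end to end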

How to execute only particular part of the scikit-learn pipeline?

from sklearn.pipeline import FeatureUnion, Pipeline

prepare_select_pipeline = Pipeline([
    ('preparation', full_pipeline),
    ('feature_selection', TopFeatureSelector(feature_importances, k))
])

feats = FeatureUnion([('prepare_and_select', prepare_select_pipeline)])

prepare_select_and_predict_pipeline = Pipeline([('feats', feats),
                               ('svm_reg', SVR(**rnd_search.best_params_))])
prepare_select_and_predict_pipeline[:-1].fit_transform(housing)
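Pipeline slicing (pipeline[:-1]) returns a sub-pipeline containing every step except the last, sharing the already fitted step objects, so you can run only the preparation and selection stages. A small self-contained sketch of the same idea (scikit-learn 0.21 or later, synthetic data, hypothetical step names):

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVR

X = np.random.RandomState(0).rand(100, 5)
y = X @ np.array([1.0, 2.0, 0.0, 0.0, 3.0])

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", PCA(n_components=3)),
    ("svm_reg", SVR()),
])
pipe.fit(X, y)

# pipe[:-1] is itself a Pipeline made of every step except the final estimator,
# so you can push data through just the preprocessing part.
X_prepared = pipe[:-1].transform(X)
print(X_prepared.shape)  # (100, 3)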

Pipeline error (ValueError: Specifying the columns using strings is only supported for pandas DataFrames)

full_pipeline_with_predictor = Pipeline([
        ("preparation", full_pipeline),
        ("linear", LinearRegression())
    ])
final_predictions = full_pipeline_with_predictor.predict(X_test)
full_pipeline_with_predictor.predict(some_data)
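The ValueError in the title appears when a ColumnTransformer whose columns are given as strings receives a NumPy array at predict time; passing a pandas DataFrame with the original column names fixes it. A hedged sketch of the distinction (toy data and column names standing in for the housing set):

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.linear_model import LinearRegression

# Toy data with named columns, standing in for the housing DataFrame.
df = pd.DataFrame({
    "median_income": [1.0, 2.0, 3.0, 4.0],
    "ocean_proximity": ["NEAR BAY", "INLAND", "NEAR BAY", "INLAND"],
    "median_house_value": [100.0, 150.0, 200.0, 260.0],
})
X, y = df[["median_income", "ocean_proximity"]], df["median_house_value"]

# Columns are referenced by *name*, so this transformer needs DataFrame input.
preparation = ColumnTransformer([
    ("num", StandardScaler(), ["median_income"]),
    ("cat", OneHotEncoder(), ["ocean_proximity"]),
])
model = Pipeline([("preparation", preparation), ("linear", LinearRegression())])
model.fit(X, y)

model.predict(X.iloc[:2])          # OK: the DataFrame keeps its column names
# model.predict(X.iloc[:2].values) # ValueError: specifying the columns using strings
#                                  # is only supported for pandas DataFrames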

Explanation of grid search CV parameter grid for pipeline

list(range(1, len(feature_importances) + 1))
[1, 2, 3, 4, 5]
prepare_select_and_predict_pipeline = Pipeline([
    ('preparation', full_pipeline),
    ('feature_selection', TopFeatureSelector(feature_importances, k)),
    ('svm_reg', SVR(**rnd_search.best_params_))
])
full_pipeline = ColumnTransformer([
        ("num", num_pipeline, num_attribs),
        ("cat", OneHotEncoder(), cat_attribs),
    ])
num_pipeline = Pipeline([
        ('imputer', SimpleImputer(strategy="median")),
        ('attribs_adder', CombinedAttributesAdder()),
        ('std_scaler', StandardScaler()),
    ])
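In a parameter grid for a pipeline, each key is the step name followed by a double underscore and the parameter name, so 'feature_selection__k': list(range(1, len(feature_importances) + 1)) tunes the k argument of the feature-selection step. A hedged, self-contained sketch of how such a grid is wired (standard scikit-learn components stand in for the book's custom TopFeatureSelector; data is synthetic):

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.RandomState(0)
X = rng.rand(200, 5)
y = X[:, 0] + 2 * X[:, 1] + 0.1 * rng.randn(200)

pipe = Pipeline([
    ("preparation", StandardScaler()),
    ("feature_selection", SelectKBest(f_regression)),  # stand-in for TopFeatureSelector
    ("svm_reg", SVR()),
])

# Each key is <step name>__<parameter name>; GridSearchCV sets that parameter on
# the named step and refits the whole pipeline for every candidate combination.
param_grid = [{
    "feature_selection__k": list(range(1, X.shape[1] + 1)),  # 1..5, like the book's grid
    "svm_reg__C": [1.0, 10.0],
}]

grid_search = GridSearchCV(pipe, param_grid, cv=3, scoring="neg_mean_squared_error")
grid_search.fit(X, y)
print(grid_search.best_params_)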

Community Discussions

Trending Discussions on handson-ml2
  • Custom environment using TFagents
  • What does urllib.request.urlretrieve do if not returned
  • Understanding and Evaluating different methods in Reinforcement Learning
  • Why shuffling the data like this leads to a poor accuracy
  • Python 3.8.3 "File-not-found" message
  • Error in using DataFrameMapper() for PolynomialFeature() in sklearn
  • MemoryError: Unable to allocate MiB for an array with shape and data type, when using anymodel.fit() in sklearn
  • Powershell set a variable and print but nothing
  • Installing Tensorflow 2 gets a dll failed to load in pywrap_tensorflow.py
  • How to execute only particular part of the scikit-learn pipeline?

QUESTION

Custom environment using TFagents

Asked 2021-Jun-02 at 22:36

I am trying to learn a custom environment using the TF-Agents package. I am following the Hands-on ML book (code in Colab, see cell 129). My aim is to use a DQN agent on a custom-written grid-world environment.

Grid-World environment:

class MyEnvironment(tf_agents.environments.py_environment.PyEnvironment):

    def __init__(self, discount=1.0):
        super().__init__()
        self.discount = discount
        self._action_spec = tf_agents.specs.BoundedArraySpec(shape=(), dtype=np.int32, name="action", minimum=0, maximum=3)
        self._observation_spec = tf_agents.specs.BoundedArraySpec(shape=(4, 4), dtype=np.int32, name="observation", minimum=0, maximum=1)

    def action_spec(self):
        return self._action_spec

    def observation_spec(self):
        return self._observation_spec

    def _reset(self):
        self._state = np.zeros(2, dtype=np.int32)
        obs = np.zeros((4, 4), dtype=np.int32)
        obs[self._state[0], self._state[1]] = 1
        return tf_agents.trajectories.time_step.restart(obs)

    def _step(self, action):
        self._state += [(-1, 0), (+1, 0), (0, -1), (0, +1)][action]
        reward = 0
        obs = np.zeros((4, 4), dtype=np.int32)
        done = (self._state.min() < 0 or self._state.max() > 3)
        if not done:
            obs[self._state[0], self._state[1]] = 1
        if done or np.all(self._state == np.array([3, 3])):
            reward = -1 if done else +10
            return tf_agents.trajectories.time_step.termination(obs, reward)
        else:
            return tf_agents.trajectories.time_step.transition(obs, reward, self.discount)

And the Q network is:

tf_env = MyEnvironment()

preprocessing_layer = keras.layers.Lambda(lambda obs: tf.cast(obs, np.float32) / 255.)
conv_layer_params=[(32, (2, 2), 1)]
fc_layer_params=[512]
q_net = QNetwork(
    tf_env.observation_spec(),
    tf_env.action_spec(),
    preprocessing_layers=preprocessing_layer,
    conv_layer_params=conv_layer_params,
    fc_layer_params=fc_layer_params)

And finally, the DQN agent is

train_step = tf.Variable(0)
update_period = 4 # train the model every 4 steps
optimizer = keras.optimizers.RMSprop(lr=2.5e-4, rho=0.95, momentum=0.0, epsilon=0.00001, centered=True)
epsilon_fn = keras.optimizers.schedules.PolynomialDecay(initial_learning_rate=1.0, decay_steps=250000 // update_period, end_learning_rate=0.01)


agent = DqnAgent(tf_env.time_step_spec(),
    tf_env.action_spec(),
    q_network=q_net,
    optimizer=optimizer,
    target_update_period=2000, # <=> 32,000 ALE frames
    td_errors_loss_fn=keras.losses.Huber(reduction="none"),
    gamma=0.99, # discount factor
    train_step_counter=train_step,
    epsilon_greedy=lambda: epsilon_fn(train_step))  
agent.initialize()

Running the code directly gave me the following error trace:

/usr/local/lib/python3.6/dist-packages/gin/config.py in gin_wrapper(*args, **kwargs)
   1067       scope_info = " in scope '{}'".format(scope_str) if scope_str else ''
   1068       err_str = err_str.format(name, fn_or_cls, scope_info)
-> 1069       utils.augment_exception_message_and_reraise(e, err_str)
   1070
   1071   return gin_wrapper

/usr/local/lib/python3.6/dist-packages/gin/utils.py in augment_exception_message_and_reraise(exception, message)
     39   proxy = ExceptionProxy()
     40   ExceptionProxy.__qualname__ = type(exception).__qualname__
---> 41   raise proxy.with_traceback(exception.__traceback__) from None
     42 
     43 

/usr/local/lib/python3.6/dist-packages/gin/config.py in gin_wrapper(*args, **kwargs)
   1044 
   1045     try:
-> 1046       return fn(*new_args, **new_kwargs)
   1047     except Exception as e:  # pylint: disable=broad-except
   1048       err_str = ''

/usr/local/lib/python3.6/dist-packages/tf_agents/agents/dqn/dqn_agent.py in __init__(self, time_step_spec, action_spec, q_network, optimizer, observation_and_action_constraint_splitter, epsilon_greedy, n_step_update, boltzmann_temperature, emit_log_probability, target_q_network, target_update_tau, target_update_period, td_errors_loss_fn, gamma, reward_scale_factor, gradient_clipping, debug_summaries, summarize_grads_and_vars, train_step_counter, name)
    216     tf.Module.__init__(self, name=name)
    217 
--> 218     self._check_action_spec(action_spec)
    219 
    220     if epsilon_greedy is not None and boltzmann_temperature is not None:

/usr/local/lib/python3.6/dist-packages/tf_agents/agents/dqn/dqn_agent.py in _check_action_spec(self, action_spec)
    293 
    294     # TODO(oars): Get DQN working with more than one dim in the actions.
--> 295     if len(flat_action_spec) > 1 or flat_action_spec[0].shape.rank > 0:
    296       raise ValueError(
    297           'Only scalar actions are supported now, but action spec is: {}'

AttributeError: 'tuple' object has no attribute 'rank'
  In call to configurable 'DqnAgent' (<class 'tf_agents.agents.dqn.dqn_agent.DqnAgent'>)

What I have tried: following the suggestions here, I modified

self._action_spec = tf_agents.specs.BoundedArraySpec(shape=(), dtype=np.int32, name="action", minimum=0, maximum=3)
self._observation_spec = tf_agents.specs.BoundedArraySpec(shape=(4, 4), dtype=np.int32, name="observation", minimum=0, maximum=1)

to:

self._action_spec = tf_agents.specs.BoundedTensorSpec(
    shape=(), dtype=np.int32, name="action", minimum=0, maximum=3)
self._observation_spec = tf_agents.specs.BoundedTensorSpec(
    shape=(4, 4), dtype=np.int32, name="observation", minimum=0, maximum=1)

However, this resulted in:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-53-ce737b2b13fd> in <module>()
     21 
     22 
---> 23 agent = DqnAgent(tf_env.time_step_spec(),
     24     tf_env.action_spec(),
     25     q_network=q_net,

1 frames
/usr/local/lib/python3.6/dist-packages/tf_agents/environments/py_environment.py in time_step_spec(self)
    147       the step_type, reward, discount, and observation structure.
    148     """
--> 149     return ts.time_step_spec(self.observation_spec(), self.reward_spec())
    150 
    151   def current_time_step(self) -> ts.TimeStep:

/usr/local/lib/python3.6/dist-packages/tf_agents/trajectories/time_step.py in time_step_spec(observation_spec, reward_spec)
    388           'Expected observation and reward specs to both be either tensor or '
    389           'array specs, but saw spec values {} vs. {}'
--> 390           .format(first_observation_spec, first_reward_spec))
    391   if isinstance(first_observation_spec, tf.TypeSpec):
    392     return TimeStep(

TypeError: Expected observation and reward specs to both be either tensor or array specs, but saw spec values BoundedTensorSpec(shape=(4, 4), dtype=tf.int32, name='observation', minimum=array(0, dtype=int32), maximum=array(1, dtype=int32)) vs. ArraySpec(shape=(), dtype=dtype('float32'), name='reward')

I understand the reward spec is the issue, so I added an extra line:

self._reward_spec = tf_agents.specs.TensorSpec((1,), np.dtype('float32'), 'reward')

but it still resulted in the same error. Is there any way I can solve this?

ANSWER

Answered 2021-Jun-02 at 22:36

You cannot use TensorSpec with PyEnvironment class objects, which is why your attempted solution does not work. A simple fix should be to use the original code:

self._action_spec = tf_agents.specs.BoundedArraySpec(shape=(), dtype=np.int32, name="action", minimum=0, maximum=3)
self._observation_spec = tf_agents.specs.BoundedArraySpec(shape=(4, 4), dtype=np.int32, name="observation", minimum=0, maximum=1)

And then wrap your environment like so:

env= MyEnvironment()
tf_env = tf_agents.environments.tf_py_environment.TFPyEnvironment(env)

This is the simplest approach. Alternatively, you could define your environment as a TFEnvironment class object, use TensorSpec, and change all your environment code to operate on tensors. I do not recommend this for a beginner.

Source https://stackoverflow.com/questions/65743558
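Putting the answer's fix together, a hedged sketch of building the agent from the wrapped environment's specs (imports, QNetwork/DqnAgent classes, and hyperparameters are taken from the question and abbreviated; not a verified training setup):

# Wrap the array-spec PyEnvironment; the wrapper exposes tensor specs.
env = MyEnvironment()
tf_env = tf_agents.environments.tf_py_environment.TFPyEnvironment(env)

# Build the Q-network and the agent from the *wrapped* environment's specs,
# so DqnAgent's spec checks pass.
q_net = QNetwork(
    tf_env.observation_spec(),
    tf_env.action_spec(),
    preprocessing_layers=keras.layers.Lambda(lambda obs: tf.cast(obs, np.float32)),  # cast int grid to float
    conv_layer_params=[(32, (2, 2), 1)],
    fc_layer_params=[512])

agent = DqnAgent(
    tf_env.time_step_spec(),
    tf_env.action_spec(),
    q_network=q_net,
    optimizer=keras.optimizers.RMSprop(learning_rate=2.5e-4, rho=0.95, epsilon=1e-5, centered=True),
    td_errors_loss_fn=keras.losses.Huber(reduction="none"),
    gamma=0.99,
    train_step_counter=tf.Variable(0))
agent.initialize()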

Community Discussions, Code Snippets contain sources that include Stack Exchange Network

Vulnerabilities

No vulnerabilities reported

Install handson-ml2

Start by installing [Anaconda](https://www.anaconda.com/distribution/) (or [Miniconda](https://docs.conda.io/en/latest/miniconda.html)), [git](https://git-scm.com/downloads), and if you have a TensorFlow-compatible GPU, install the [GPU driver](https://www.nvidia.com/Download/index.aspx), as well as the appropriate version of CUDA and cuDNN (see TensorFlow’s documentation for more details).
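After setting up the driver, CUDA, and cuDNN, a quick check (assuming TensorFlow 2 is already installed in the active environment) confirms whether TensorFlow was built with CUDA and can see the GPU:

import tensorflow as tf

print(tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))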

Support

For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
