
models | Models and examples built with TensorFlow | Machine Learning library

by tensorflow | Python | Version: v2.7.1 | License: Non-SPDX

kandi X-RAY | models Summary

models is a Python library typically used in Institutions, Learning, Administration, Public Services, Artificial Intelligence, Machine Learning, Deep Learning, PyTorch, and TensorFlow applications. models has no reported bugs or vulnerabilities and has high support. However, its build file is not available and it carries a Non-SPDX license. You can install it using 'pip install models' or download it from GitHub or PyPI.
The TensorFlow Model Garden is a repository with a number of different implementations of state-of-the-art (SOTA) models and modeling solutions for TensorFlow users. We aim to demonstrate the best practices for modeling so that TensorFlow users can take full advantage of TensorFlow for their research and product development. To improve the transparency and reproducibility of our models, training logs on TensorBoard.dev are also provided where possible, though not every model is suitable for this.
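
Note on installation: the package the Model Garden team publishes on PyPI is tf-models-official; the generic 'models' name on PyPI may not correspond to this repository, so verify against the repository's installation docs. A minimal sketch of consuming the official package, assuming that package name and its current layout:

# Hedged sketch: after `pip install tf-models-official`, the Model Garden's
# code is importable under the `official` package (layout may change between
# releases; check the repo's docs for your TensorFlow version).
import official.nlp     # NLP modeling components
import official.vision  # vision modeling components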

Support

  • models has a highly active ecosystem.
  • It has 73392 star(s) with 45757 fork(s). There are 2855 watchers for this library.
  • There were 4 major release(s) in the last 12 months.
  • There are 1154 open issues and 5700 have been closed. On average, issues are closed in 375 days. There are 170 open pull requests and 0 closed pull requests.
  • It has a positive sentiment in the developer community.
  • The latest version of models is v2.7.1.

Quality

  • models has 0 bugs and 0 code smells.

Security

  • models has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • models code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • models has a Non-SPDX License.
  • A Non-SPDX license may be an open-source license that is simply not SPDX-compliant, or it may not be open source at all, so review it closely before use.

Reuse

  • models releases are available to install and integrate.
  • Deployable package is available in PyPI.
  • models has no build file. You will need to create the build yourself to build the component from source.
  • Installation instructions are not available. Examples and code snippets are available.
  • models saves you 66460 person hours of effort in developing the same functionality from scratch.
  • It has 123137 lines of code, 6615 functions and 988 files.
  • It has medium code complexity. Code complexity directly impacts maintainability of the code.
Top functions reviewed by kandi - BETA

kandi has reviewed models and discovered the functions below as its top functions. This is intended to give you an instant insight into the functionality models implements, and to help you decide whether it suits your requirements.

  • Run a custom training loop.
  • Base function for inception v2.
  • Train a model.
  • Performs a batch_multiclass.
  • Base function for inception v3.
  • Convert examples to features.
  • Base function for s3d.
  • Draw a bbox image.
  • Creates a function that wraps the object detection.
  • Build a keras model.

models Key Features

Models and examples built with TensorFlow

Citing TensorFlow Model Garden

@misc{tensorflowmodelgarden2020,
  author = {Hongkun Yu and Chen Chen and Xianzhi Du and Yeqing Li and
            Abdullah Rashwan and Le Hou and Pengchong Jin and Fan Yang and
            Frederick Liu and Jaeyoun Kim and Jing Li},
  title = {{TensorFlow Model Garden}},
  howpublished = {\url{https://github.com/tensorflow/models}},
  year = {2020}
}

Keras AttributeError: 'Sequential' object has no attribute 'predict_classes'

predict_x = model.predict(X_test)
classes_x = np.argmax(predict_x, axis=1)
-----------------------
y_pred = model.predict(X_test)
y_pred = np.round(y_pred).astype(int)
-----------------------
# predict_classes was removed (TensorFlow 2.6+); replace
#   predictions = model.predict_classes(x_test)
# with, for a binary sigmoid output:
predictions = (model.predict(x_test) > 0.5).astype("int32")
-----------------------
y_predict = np.argmax(model.predict(x_test), axis=-1)
-----------------------
predicted = np.argmax(model.predict(token_list), axis=1)
-----------------------
predictions = (model.predict(X_test) > 0.5) * 1
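
Taken together: use argmax for a multiclass softmax output and a 0.5 threshold for a binary sigmoid output. A minimal sketch, assuming a trained Keras model and test inputs named model and x_test:

import numpy as np

# Multiclass model (softmax over N classes): the predicted class is the
# argmax over the last axis of the probability vector.
class_ids = np.argmax(model.predict(x_test), axis=-1)

# Binary model (single sigmoid unit): threshold the probability at 0.5.
binary_ids = (model.predict(x_test) > 0.5).astype("int32")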

cannot import name '_registerMatType' from 'cv2.cv2'

C:\Windows\system32>pip list |findstr opencv
opencv-python                 4.5.2.52
opencv-python-headless        4.5.5.62
pip uninstall opencv-python-headless==4.5.5.62
pip install opencv-python-headless==4.5.2.52
-----------------------
pip uninstall opencv-python
pip install opencv-python
-----------------------
pip list | grep opencv 
opencv-contrib-python    4.5.3.56
opencv-python            4.5.5.62
python -m pip install --upgrade opencv-contrib-python
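
Whichever fix you apply, a quick sanity check (not from the original answers) that Python now imports a single consistent OpenCV build:

import cv2

# After aligning opencv-python / opencv-python-headless / opencv-contrib-python
# versions, this should report one version matching your `pip list` output.
print(cv2.__version__)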

How to use SageMaker Estimator for model training and saving

/opt/ml
├── input
│   ├── config
│   │   ├── hyperparameters.json  <--- From Estimator hyperparameter arg
│   │   └── resourceConfig.json
│   └── data
│       └── <channel_name>        <--- From Estimator fit method inputs arg
│           └── <input data>
├── code
│   └── <code files>              <--- From Estimator src_dir arg
├── model
│   └── <model files>             <--- Location to save the trained model artifacts
└── output
    └── failure                   <--- Training job failure logs
/opt/ml
├── input
│   ├── config
│   │   ├── hyperparameters.json
│   │   └── resourceConfig.json
│   └── data
│       └── <channel_name>
│           └── <input data>
├── model
│   └── <model files>
└── output
    └── failure
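
For orientation, a minimal sketch (not part of the original answer) of reading those fixed container paths from inside a training script; SageMaker writes the Estimator's hyperparameters to the JSON file shown in the trees above, with values serialized as strings:

import json

with open("/opt/ml/input/config/hyperparameters.json") as f:
    hyperparameters = json.load(f)
print(hyperparameters)  # e.g. {"epochs": "2", "batch-size": "64"}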
    # Save the model
    # A version number is needed for the serving container
    # to load the model
    version = "00000000"
    ckpt_dir = os.path.join(args.model_dir, version)
    if not os.path.exists(ckpt_dir):
        os.makedirs(ckpt_dir)
    model.save(ckpt_dir)
import sagemaker
from sagemaker import get_execution_role

sagemaker_session = sagemaker.Session()
role = get_execution_role()
bucket = sagemaker_session.default_bucket()

metric_definitions = [
    {"Name": "train:loss", "Regex": ".*loss: ([0-9\\.]+) - accuracy: [0-9\\.]+.*"},
    {"Name": "train:accuracy", "Regex": ".*loss: [0-9\\.]+ - accuracy: ([0-9\\.]+).*"},
    {
        "Name": "validation:accuracy",
        "Regex": ".*step - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: [0-9\\.]+ - val_accuracy: ([0-9\\.]+).*",
    },
    {
        "Name": "validation:loss",
        "Regex": ".*step - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: ([0-9\\.]+) - val_accuracy: [0-9\\.]+.*",
    },
    {
        "Name": "sec/sample",
        "Regex": ".* - \d+s (\d+)[mu]s/sample - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: [0-9\\.]+ - val_accuracy: [0-9\\.]+",
    },
]
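
# (Added sketch, not part of the original answer.) Sanity-check that the
# first regex above captures the training loss from a typical Keras
# progress line; `sample` is an illustrative log line.
import re
sample = ("60000/60000 - 10s 170us/sample - loss: 0.4032 - accuracy: 0.8564"
          " - val_loss: 0.3311 - val_accuracy: 0.8803")
assert re.match(metric_definitions[0]["Regex"], sample).group(1) == "0.4032"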

import uuid

checkpoint_s3_prefix = "checkpoints/{}".format(str(uuid.uuid4()))
checkpoint_s3_uri = "s3://{}/{}/".format(bucket, checkpoint_s3_prefix)

from sagemaker.tensorflow import TensorFlow

# --------------------------------------------------------------------------------
# 'trainingJobName' must satisfy regular expression pattern: ^[a-zA-Z0-9](-*[a-zA-Z0-9]){0,62}
# --------------------------------------------------------------------------------
base_job_name = "fashion-mnist"
hyperparameters = {
    "epochs": 2, 
    "batch-size": 64
}
estimator = TensorFlow(
    entry_point="fashion_mnist.py",
    source_dir="src",
    metric_definitions=metric_definitions,
    hyperparameters=hyperparameters,
    role=role,
    input_mode='File',
    framework_version="2.3.1",
    py_version="py37",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    base_job_name=base_job_name,
    checkpoint_s3_uri=checkpoint_s3_uri,
    model_dir=False
)
estimator.fit()
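
The hyperparameters dict above reaches the entry point as --epochs 2 --batch-size 64 (see SM_USER_ARGS in the log further down). As a hedged sketch, the same script can be dry-run locally outside SageMaker by supplying the SM_* environment variables it reads; the paths below are illustrative stand-ins for /opt/ml/model and /opt/ml/output:

import os
import subprocess

os.environ["SM_MODEL_DIR"] = "/tmp/model"
os.environ["SM_OUTPUT_DIR"] = "/tmp/output"

# Assumes the script sits at src/fashion_mnist.py, matching the Estimator's
# source_dir and entry_point arguments above.
subprocess.run(
    ["python", "src/fashion_mnist.py", "--epochs", "1", "--batch-size", "64"],
    check=True,
)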
import os
import argparse
import json
import multiprocessing

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, BatchNormalization
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras.layers.experimental.preprocessing import Normalization
from tensorflow.keras import backend as K

print("TensorFlow version: {}".format(tf.__version__))
print("Eager execution is: {}".format(tf.executing_eagerly()))
print("Keras version: {}".format(tf.keras.__version__))


image_width = 28
image_height = 28


def load_data():
    fashion_mnist = tf.keras.datasets.fashion_mnist
    (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

    number_of_classes = len(set(y_train))
    print("number_of_classes", number_of_classes)

    x_train = x_train / 255.0
    x_test = x_test / 255.0
    x_full = np.concatenate((x_train, x_test), axis=0)
    print(x_full.shape)

    print(type(x_train))
    print(x_train.shape)
    print(x_train.dtype)
    print(y_train.shape)
    print(y_train.dtype)

    # ## Train
    # * C: Convolution layer
    # * P: Pooling layer
    # * B: Batch normalization layer
    # * F: Fully connected layer
    # * O: Output fully connected softmax layer

    # Reshape data based on channels first / channels last strategy.
    # This is dependent on whether you use TF, Theano or CNTK as backend.
    # Source: https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py
    if K.image_data_format() == 'channels_first':
        x_train = x_train.reshape(x_train.shape[0], 1, image_width, image_height)
        x_test = x_test.reshape(x_test.shape[0], 1, image_width, image_height)
        input_shape = (1, image_width, image_height)
    else:
        x_train = x_train.reshape(x_train.shape[0], image_width, image_height, 1)
        x_test = x_test.reshape(x_test.shape[0], image_width, image_height, 1)
        input_shape = (image_width, image_height, 1)

    return x_train, y_train, x_test, y_test, input_shape, number_of_classes

# tensorboard --logdir=/full_path_to_your_logs

validation_split = 0.2
verbosity = 1
use_multiprocessing = True
workers = multiprocessing.cpu_count()


def train(model, x, y, args):
    # SavedModel Output
    tensorflow_saved_model_path = os.path.join(args.model_dir, "tensorflow/saved_model/0")
    os.makedirs(tensorflow_saved_model_path, exist_ok=True)

    # Tensorboard Logs
    tensorboard_logs_path = os.path.join(args.model_dir, "tensorboard/")
    os.makedirs(tensorboard_logs_path, exist_ok=True)

    tensorboard_callback = tf.keras.callbacks.TensorBoard(
        log_dir=tensorboard_logs_path,
        write_graph=True,
        write_images=True,
        histogram_freq=1,  # How often to log histogram visualizations
        embeddings_freq=1,  # How often to log embedding visualizations
        update_freq="epoch",
    )  # How often to write logs (default: once per epoch)

    model.compile(
        optimizer='adam',
        loss=tf.keras.losses.sparse_categorical_crossentropy,
        metrics=['accuracy']
    )
    history = model.fit(
        x,
        y,
        shuffle=True,
        batch_size=args.batch_size,
        epochs=args.epochs,
        validation_split=validation_split,
        use_multiprocessing=use_multiprocessing,
        workers=workers,
        verbose=verbosity,
        callbacks=[
            tensorboard_callback
        ]
    )
    return history


def create_model(input_shape, number_of_classes):
    model = Sequential([
        Conv2D(
            name="conv01",
            filters=32,
            kernel_size=(3, 3),
            strides=(1, 1),
            padding="same",
            activation='relu',
            input_shape=input_shape
        ),
        MaxPooling2D(
            name="pool01",
            pool_size=(2, 2)
        ),
        Flatten(),  # 3D shape to 1D.
        BatchNormalization(
            name="batch_before_full01"
        ),
        Dense(
            name="full01",
            units=300,
            activation="relu"
        ),  # Fully connected layer
        Dense(
            name="output_softmax",
            units=number_of_classes,
            activation="softmax"
        )
    ])
    return model


def save_model(model, args):
    # Save the model
    # A version number is needed for the serving container
    # to load the model
    version = "00000000"
    model_save_dir = os.path.join(args.model_dir, version)
    if not os.path.exists(model_save_dir):
        os.makedirs(model_save_dir)
    print(f"saving model at {model_save_dir}")
    model.save(model_save_dir)


def parse_args():
    # --------------------------------------------------------------------------------
    # https://docs.python.org/dev/library/argparse.html#dest
    # --------------------------------------------------------------------------------
    parser = argparse.ArgumentParser()

    # --------------------------------------------------------------------------------
    # The 'hyperparameters' Estimator argument is passed to the script as
    # command-line arguments. argparse turns '--batch-size' into args.batch_size
    # (dashes become underscores in the dest name).
    # --------------------------------------------------------------------------------
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--batch-size', type=int, default=64)

    # /opt/ml/model
    # sagemaker.tensorflow.estimator.TensorFlow overrides 'model_dir'.
    # See https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/\
    # sagemaker.tensorflow.html#sagemaker.tensorflow.estimator.TensorFlow
    parser.add_argument('--model_dir', type=str, default=os.environ['SM_MODEL_DIR'])

    # /opt/ml/output
    parser.add_argument("--output_dir", type=str, default=os.environ["SM_OUTPUT_DIR"])

    args = parser.parse_args()
    return args


if __name__ == "__main__":
    args = parse_args()
    print("---------- key/value args")
    for key, value in vars(args).items():
        print(f"{key}:{value}")

    x_train, y_train, x_test, y_test, input_shape, number_of_classes = load_data()
    model = create_model(input_shape, number_of_classes)

    history = train(model=model, x=x_train, y=y_train, args=args)
    print(history)
    
    save_model(model, args)
    results = model.evaluate(x_test, y_test, batch_size=100)
    print("test loss, test accuracy:", results)
2021-09-03 03:02:04 Starting - Starting the training job...
2021-09-03 03:02:16 Starting - Launching requested ML instancesProfilerReport-1630638122: InProgress
......
2021-09-03 03:03:17 Starting - Preparing the instances for training.........
2021-09-03 03:04:59 Downloading - Downloading input data
2021-09-03 03:04:59 Training - Downloading the training image...
2021-09-03 03:05:23 Training - Training image download completed. Training in progress.2021-09-03 03:05:23.966037: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
2021-09-03 03:05:23.969704: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.
2021-09-03 03:05:24.118054: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
2021-09-03 03:05:26,842 sagemaker-training-toolkit INFO     Imported framework sagemaker_tensorflow_container.training
2021-09-03 03:05:26,852 sagemaker-training-toolkit INFO     No GPUs detected (normal if no gpus installed)
2021-09-03 03:05:27,734 sagemaker-training-toolkit INFO     Installing dependencies from requirements.txt:
/usr/local/bin/python3.7 -m pip install -r requirements.txt
WARNING: You are using pip version 21.0.1; however, version 21.2.4 is available.
You should consider upgrading via the '/usr/local/bin/python3.7 -m pip install --upgrade pip' command.

2021-09-03 03:05:29,028 sagemaker-training-toolkit INFO     No GPUs detected (normal if no gpus installed)
2021-09-03 03:05:29,045 sagemaker-training-toolkit INFO     No GPUs detected (normal if no gpus installed)
2021-09-03 03:05:29,062 sagemaker-training-toolkit INFO     No GPUs detected (normal if no gpus installed)
2021-09-03 03:05:29,072 sagemaker-training-toolkit INFO     Invoking user script

Training Env:

{
    "additional_framework_parameters": {},
    "channel_input_dirs": {},
    "current_host": "algo-1",
    "framework_module": "sagemaker_tensorflow_container.training:main",
    "hosts": [
        "algo-1"
    ],
    "hyperparameters": {
        "batch-size": 64,
        "epochs": 2
    },
    "input_config_dir": "/opt/ml/input/config",
    "input_data_config": {},
    "input_dir": "/opt/ml/input",
    "is_master": true,
    "job_name": "fashion-mnist-2021-09-03-03-02-02-305",
    "log_level": 20,
    "master_hostname": "algo-1",
    "model_dir": "/opt/ml/model",
    "module_dir": "s3://sagemaker-us-east-1-316725000538/fashion-mnist-2021-09-03-03-02-02-305/source/sourcedir.tar.gz",
    "module_name": "fashion_mnist",
    "network_interface_name": "eth0",
    "num_cpus": 4,
    "num_gpus": 0,
    "output_data_dir": "/opt/ml/output/data",
    "output_dir": "/opt/ml/output",
    "output_intermediate_dir": "/opt/ml/output/intermediate",
    "resource_config": {
        "current_host": "algo-1",
        "hosts": [
            "algo-1"
        ],
        "network_interface_name": "eth0"
    },
    "user_entry_point": "fashion_mnist.py"
}

Environment variables:

SM_HOSTS=["algo-1"]
SM_NETWORK_INTERFACE_NAME=eth0
SM_HPS={"batch-size":64,"epochs":2}
SM_USER_ENTRY_POINT=fashion_mnist.py
SM_FRAMEWORK_PARAMS={}
SM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"}
SM_INPUT_DATA_CONFIG={}
SM_OUTPUT_DATA_DIR=/opt/ml/output/data
SM_CHANNELS=[]
SM_CURRENT_HOST=algo-1
SM_MODULE_NAME=fashion_mnist
SM_LOG_LEVEL=20
SM_FRAMEWORK_MODULE=sagemaker_tensorflow_container.training:main
SM_INPUT_DIR=/opt/ml/input
SM_INPUT_CONFIG_DIR=/opt/ml/input/config
SM_OUTPUT_DIR=/opt/ml/output
SM_NUM_CPUS=4
SM_NUM_GPUS=0
SM_MODEL_DIR=/opt/ml/model
SM_MODULE_DIR=s3://sagemaker-us-east-1-316725000538/fashion-mnist-2021-09-03-03-02-02-305/source/sourcedir.tar.gz
SM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{},"current_host":"algo-1","framework_module":"sagemaker_tensorflow_container.training:main","hosts":["algo-1"],"hyperparameters":{"batch-size":64,"epochs":2},"input_config_dir":"/opt/ml/input/config","input_data_config":{},"input_dir":"/opt/ml/input","is_master":true,"job_name":"fashion-mnist-2021-09-03-03-02-02-305","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-east-1-316725000538/fashion-mnist-2021-09-03-03-02-02-305/source/sourcedir.tar.gz","module_name":"fashion_mnist","network_interface_name":"eth0","num_cpus":4,"num_gpus":0,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"fashion_mnist.py"}
SM_USER_ARGS=["--batch-size","64","--epochs","2"]
SM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate
SM_HP_BATCH-SIZE=64
SM_HP_EPOCHS=2
PYTHONPATH=/opt/ml/code:/usr/local/bin:/usr/local/lib/python37.zip:/usr/local/lib/python3.7:/usr/local/lib/python3.7/lib-dynload:/usr/local/lib/python3.7/site-packages

Invoking script with the following command:

/usr/local/bin/python3.7 fashion_mnist.py --batch-size 64 --epochs 2


TensorFlow version: 2.3.1
Eager execution is: True
Keras version: 2.4.0
---------- key/value args
epochs:2
batch_size:64
model_dir:/opt/ml/model
output_dir:/opt/ml/output
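
After the job completes, SageMaker archives the contents of /opt/ml/model as model.tar.gz under the job's S3 output path. A minimal sketch of loading the versioned SavedModel written by save_model above, assuming the archive has been downloaded and extracted into a local model/ directory:

import tensorflow as tf

# "00000000" is the version directory created by save_model for the
# serving container.
model = tf.keras.models.load_model("model/00000000")
model.summary()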
-----------------------
/opt/ml
├── input
│   ├── config
│   │   ├── hyperparameters.json  <--- From Estimator hyperparameter arg
│   │   └── resourceConfig.json
│   └── data
│       └── <channel_name>        <--- From Estimator fit method inputs arg
│           └── <input data>
├── code
│   └── <code files>              <--- From Estimator src_dir arg
├── model
│   └── <model files>             <--- Location to save the trained model artifacts
└── output
    └── failure                   <--- Training job failure logs
/opt/ml
├── input
│   ├── config
│   │   ├── hyperparameters.json
│   │   └── resourceConfig.json
│   └── data
│       └── <channel_name>
│           └── <input data>
├── model
│   └── <model files>
└── output
    └── failure
    # Save the model
    # A version number is needed for the serving container
    # to load the model
    version = "00000000"
    ckpt_dir = os.path.join(args.model_dir, version)
    if not os.path.exists(ckpt_dir):
        os.makedirs(ckpt_dir)
    model.save(ckpt_dir)
import sagemaker
from sagemaker import get_execution_role

sagemaker_session = sagemaker.Session()
role = get_execution_role()
bucket = sagemaker_session.default_bucket()

metric_definitions = [
    {"Name": "train:loss", "Regex": ".*loss: ([0-9\\.]+) - accuracy: [0-9\\.]+.*"},
    {"Name": "train:accuracy", "Regex": ".*loss: [0-9\\.]+ - accuracy: ([0-9\\.]+).*"},
    {
        "Name": "validation:accuracy",
        "Regex": ".*step - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: [0-9\\.]+ - val_accuracy: ([0-9\\.]+).*",
    },
    {
        "Name": "validation:loss",
        "Regex": ".*step - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: ([0-9\\.]+) - val_accuracy: [0-9\\.]+.*",
    },
    {
        "Name": "sec/sample",
        "Regex": ".* - \d+s (\d+)[mu]s/sample - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: [0-9\\.]+ - val_accuracy: [0-9\\.]+",
    },
]

import uuid

checkpoint_s3_prefix = "checkpoints/{}".format(str(uuid.uuid4()))
checkpoint_s3_uri = "s3://{}/{}/".format(bucket, checkpoint_s3_prefix)

from sagemaker.tensorflow import TensorFlow

# --------------------------------------------------------------------------------
# 'trainingJobName' msut satisfy regular expression pattern: ^[a-zA-Z0-9](-*[a-zA-Z0-9]){0,62}
# --------------------------------------------------------------------------------
base_job_name = "fashion-mnist"
hyperparameters = {
    "epochs": 2, 
    "batch-size": 64
}
estimator = TensorFlow(
    entry_point="fashion_mnist.py",
    source_dir="src",
    metric_definitions=metric_definitions,
    hyperparameters=hyperparameters,
    role=role,
    input_mode='File',
    framework_version="2.3.1",
    py_version="py37",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    base_job_name=base_job_name,
    checkpoint_s3_uri=checkpoint_s3_uri,
    model_dir=False
)
estimator.fit()
import os
import argparse
import json
import multiprocessing

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, BatchNormalization
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras.layers.experimental.preprocessing import Normalization
from tensorflow.keras import backend as K

print("TensorFlow version: {}".format(tf.__version__))
print("Eager execution is: {}".format(tf.executing_eagerly()))
print("Keras version: {}".format(tf.keras.__version__))


image_width = 28
image_height = 28


def load_data():
    fashion_mnist = tf.keras.datasets.fashion_mnist
    (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

    number_of_classes = len(set(y_train))
    print("number_of_classes", number_of_classes)

    x_train = x_train / 255.0
    x_test = x_test / 255.0
    x_full = np.concatenate((x_train, x_test), axis=0)
    print(x_full.shape)

    print(type(x_train))
    print(x_train.shape)
    print(x_train.dtype)
    print(y_train.shape)
    print(y_train.dtype)

    # ## Train
    # * C: Convolution layer
    # * P: Pooling layer
    # * B: Batch normalization layer
    # * F: Fully connected layer
    # * O: Output fully connected softmax layer

    # Reshape data based on channels first / channels last strategy.
    # This is dependent on whether you use TF, Theano or CNTK as backend.
    # Source: https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py
    if K.image_data_format() == 'channels_first':
        x = x_train.reshape(x_train.shape[0], 1, image_width, image_height)
        x_test = x_test.reshape(x_test.shape[0], 1, image_width, image_height)
        input_shape = (1, image_width, image_height)
    else:
        x_train = x_train.reshape(x_train.shape[0], image_width, image_height, 1)
        x_test = x_test.reshape(x_test.shape[0], image_width, image_height, 1)
        input_shape = (image_width, image_height, 1)

    return x_train, y_train, x_test, y_test, input_shape, number_of_classes

# tensorboard --logdir=/full_path_to_your_logs

validation_split = 0.2
verbosity = 1
use_multiprocessing = True
workers = multiprocessing.cpu_count()


def train(model, x, y, args):
    # SavedModel Output
    tensorflow_saved_model_path = os.path.join(args.model_dir, "tensorflow/saved_model/0")
    os.makedirs(tensorflow_saved_model_path, exist_ok=True)

    # Tensorboard Logs
    tensorboard_logs_path = os.path.join(args.model_dir, "tensorboard/")
    os.makedirs(tensorboard_logs_path, exist_ok=True)

    tensorboard_callback = tf.keras.callbacks.TensorBoard(
        log_dir=tensorboard_logs_path,
        write_graph=True,
        write_images=True,
        histogram_freq=1,  # How often to log histogram visualizations
        embeddings_freq=1,  # How often to log embedding visualizations
        update_freq="epoch",
    )  # How often to write logs (default: once per epoch)

    model.compile(
        optimizer='adam',
        loss=tf.keras.losses.sparse_categorical_crossentropy,
        metrics=['accuracy']
    )
    history = model.fit(
        x,
        y,
        shuffle=True,
        batch_size=args.batch_size,
        epochs=args.epochs,
        validation_split=validation_split,
        use_multiprocessing=use_multiprocessing,
        workers=workers,
        verbose=verbosity,
        callbacks=[
            tensorboard_callback
        ]
    )
    return history


def create_model(input_shape, number_of_classes):
    model = Sequential([
        Conv2D(
            name="conv01",
            filters=32,
            kernel_size=(3, 3),
            strides=(1, 1),
            padding="same",
            activation='relu',
            input_shape=input_shape
        ),
        MaxPooling2D(
            name="pool01",
            pool_size=(2, 2)
        ),
        Flatten(),  # 3D shape to 1D.
        BatchNormalization(
            name="batch_before_full01"
        ),
        Dense(
            name="full01",
            units=300,
            activation="relu"
        ),  # Fully connected layer
        Dense(
            name="output_softmax",
            units=number_of_classes,
            activation="softmax"
        )
    ])
    return model


def save_model(model, args):
    # Save the model
    # A version number is needed for the serving container
    # to load the model
    version = "00000000"
    model_save_dir = os.path.join(args.model_dir, version)
    if not os.path.exists(model_save_dir):
        os.makedirs(model_save_dir)
    print(f"saving model at {model_save_dir}")
    model.save(model_save_dir)


def parse_args():
    # --------------------------------------------------------------------------------
    # https://docs.python.org/dev/library/argparse.html#dest
    # --------------------------------------------------------------------------------
    parser = argparse.ArgumentParser()

    # --------------------------------------------------------------------------------
    # hyperparameters Estimator argument are passed as command-line arguments to the script.
    # --------------------------------------------------------------------------------
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--batch-size', type=int, default=64)

    # /opt/ml/model
    # sagemaker.tensorflow.estimator.TensorFlow override 'model_dir'.
    # See https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/\
    # sagemaker.tensorflow.html#sagemaker.tensorflow.estimator.TensorFlow
    parser.add_argument('--model_dir', type=str, default=os.environ['SM_MODEL_DIR'])

    # /opt/ml/output
    parser.add_argument("--output_dir", type=str, default=os.environ["SM_OUTPUT_DIR"])

    args = parser.parse_args()
    return args


if __name__ == "__main__":
    args = parse_args()
    print("---------- key/value args")
    for key, value in vars(args).items():
        print(f"{key}:{value}")

    x_train, y_train, x_test, y_test, input_shape, number_of_classes = load_data()
    model = create_model(input_shape, number_of_classes)

    history = train(model=model, x=x_train, y=y_train, args=args)
    print(history)
    
    save_model(model, args)
    results = model.evaluate(x_test, y_test, batch_size=100)
    print("test loss, test accuracy:", results)
2021-09-03 03:02:04 Starting - Starting the training job...
2021-09-03 03:02:16 Starting - Launching requested ML instancesProfilerReport-1630638122: InProgress
......
2021-09-03 03:03:17 Starting - Preparing the instances for training.........
2021-09-03 03:04:59 Downloading - Downloading input data
2021-09-03 03:04:59 Training - Downloading the training image...
2021-09-03 03:05:23 Training - Training image download completed. Training in progress.2021-09-03 03:05:23.966037: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
2021-09-03 03:05:23.969704: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.
2021-09-03 03:05:24.118054: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
2021-09-03 03:05:26,842 sagemaker-training-toolkit INFO     Imported framework sagemaker_tensorflow_container.training
2021-09-03 03:05:26,852 sagemaker-training-toolkit INFO     No GPUs detected (normal if no gpus installed)
2021-09-03 03:05:27,734 sagemaker-training-toolkit INFO     Installing dependencies from requirements.txt:
/usr/local/bin/python3.7 -m pip install -r requirements.txt
WARNING: You are using pip version 21.0.1; however, version 21.2.4 is available.
You should consider upgrading via the '/usr/local/bin/python3.7 -m pip install --upgrade pip' command.

2021-09-03 03:05:29,028 sagemaker-training-toolkit INFO     No GPUs detected (normal if no gpus installed)
2021-09-03 03:05:29,045 sagemaker-training-toolkit INFO     No GPUs detected (normal if no gpus installed)
2021-09-03 03:05:29,062 sagemaker-training-toolkit INFO     No GPUs detected (normal if no gpus installed)
2021-09-03 03:05:29,072 sagemaker-training-toolkit INFO     Invoking user script

Training Env:

{
    "additional_framework_parameters": {},
    "channel_input_dirs": {},
    "current_host": "algo-1",
    "framework_module": "sagemaker_tensorflow_container.training:main",
    "hosts": [
        "algo-1"
    ],
    "hyperparameters": {
        "batch-size": 64,
        "epochs": 2
    },
    "input_config_dir": "/opt/ml/input/config",
    "input_data_config": {},
    "input_dir": "/opt/ml/input",
    "is_master": true,
    "job_name": "fashion-mnist-2021-09-03-03-02-02-305",
    "log_level": 20,
    "master_hostname": "algo-1",
    "model_dir": "/opt/ml/model",
    "module_dir": "s3://sagemaker-us-east-1-316725000538/fashion-mnist-2021-09-03-03-02-02-305/source/sourcedir.tar.gz",
    "module_name": "fashion_mnist",
    "network_interface_name": "eth0",
    "num_cpus": 4,
    "num_gpus": 0,
    "output_data_dir": "/opt/ml/output/data",
    "output_dir": "/opt/ml/output",
    "output_intermediate_dir": "/opt/ml/output/intermediate",
    "resource_config": {
        "current_host": "algo-1",
        "hosts": [
            "algo-1"
        ],
        "network_interface_name": "eth0"
    },
    "user_entry_point": "fashion_mnist.py"
}

Environment variables:

SM_HOSTS=["algo-1"]
SM_NETWORK_INTERFACE_NAME=eth0
SM_HPS={"batch-size":64,"epochs":2}
SM_USER_ENTRY_POINT=fashion_mnist.py
SM_FRAMEWORK_PARAMS={}
SM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"}
SM_INPUT_DATA_CONFIG={}
SM_OUTPUT_DATA_DIR=/opt/ml/output/data
SM_CHANNELS=[]
SM_CURRENT_HOST=algo-1
SM_MODULE_NAME=fashion_mnist
SM_LOG_LEVEL=20
SM_FRAMEWORK_MODULE=sagemaker_tensorflow_container.training:main
SM_INPUT_DIR=/opt/ml/input
SM_INPUT_CONFIG_DIR=/opt/ml/input/config
SM_OUTPUT_DIR=/opt/ml/output
SM_NUM_CPUS=4
SM_NUM_GPUS=0
SM_MODEL_DIR=/opt/ml/model
SM_MODULE_DIR=s3://sagemaker-us-east-1-316725000538/fashion-mnist-2021-09-03-03-02-02-305/source/sourcedir.tar.gz
SM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{},"current_host":"algo-1","framework_module":"sagemaker_tensorflow_container.training:main","hosts":["algo-1"],"hyperparameters":{"batch-size":64,"epochs":2},"input_config_dir":"/opt/ml/input/config","input_data_config":{},"input_dir":"/opt/ml/input","is_master":true,"job_name":"fashion-mnist-2021-09-03-03-02-02-305","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-east-1-316725000538/fashion-mnist-2021-09-03-03-02-02-305/source/sourcedir.tar.gz","module_name":"fashion_mnist","network_interface_name":"eth0","num_cpus":4,"num_gpus":0,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"fashion_mnist.py"}
SM_USER_ARGS=["--batch-size","64","--epochs","2"]
SM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate
SM_HP_BATCH-SIZE=64
SM_HP_EPOCHS=2
PYTHONPATH=/opt/ml/code:/usr/local/bin:/usr/local/lib/python37.zip:/usr/local/lib/python3.7:/usr/local/lib/python3.7/lib-dynload:/usr/local/lib/python3.7/site-packages

Invoking script with the following command:

/usr/local/bin/python3.7 fashion_mnist.py --batch-size 64 --epochs 2


TensorFlow version: 2.3.1
Eager execution is: True
Keras version: 2.4.0
---------- key/value args
epochs:2
batch_size:64
model_dir:/opt/ml/model
output_dir:/opt/ml/output
-----------------------
/opt/ml
├── input
│   ├── config
│   │   ├── hyperparameters.json  <--- From Estimator hyperparameter arg
│   │   └── resourceConfig.json
│   └── data
│       └── <channel_name>        <--- From Estimator fit method inputs arg
│           └── <input data>
├── code
│   └── <code files>              <--- From Estimator src_dir arg
├── model
│   └── <model files>             <--- Location to save the trained model artifacts
└── output
    └── failure                   <--- Training job failure logs
/opt/ml
├── input
│   ├── config
│   │   ├── hyperparameters.json
│   │   └── resourceConfig.json
│   └── data
│       └── <channel_name>
│           └── <input data>
├── model
│   └── <model files>
└── output
    └── failure
    # Save the model
    # A version number is needed for the serving container
    # to load the model
    version = "00000000"
    ckpt_dir = os.path.join(args.model_dir, version)
    if not os.path.exists(ckpt_dir):
        os.makedirs(ckpt_dir)
    model.save(ckpt_dir)
import sagemaker
from sagemaker import get_execution_role

sagemaker_session = sagemaker.Session()
role = get_execution_role()
bucket = sagemaker_session.default_bucket()

metric_definitions = [
    {"Name": "train:loss", "Regex": ".*loss: ([0-9\\.]+) - accuracy: [0-9\\.]+.*"},
    {"Name": "train:accuracy", "Regex": ".*loss: [0-9\\.]+ - accuracy: ([0-9\\.]+).*"},
    {
        "Name": "validation:accuracy",
        "Regex": ".*step - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: [0-9\\.]+ - val_accuracy: ([0-9\\.]+).*",
    },
    {
        "Name": "validation:loss",
        "Regex": ".*step - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: ([0-9\\.]+) - val_accuracy: [0-9\\.]+.*",
    },
    {
        "Name": "sec/sample",
        "Regex": ".* - \d+s (\d+)[mu]s/sample - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: [0-9\\.]+ - val_accuracy: [0-9\\.]+",
    },
]

import uuid

checkpoint_s3_prefix = "checkpoints/{}".format(str(uuid.uuid4()))
checkpoint_s3_uri = "s3://{}/{}/".format(bucket, checkpoint_s3_prefix)

from sagemaker.tensorflow import TensorFlow

# --------------------------------------------------------------------------------
# 'trainingJobName' msut satisfy regular expression pattern: ^[a-zA-Z0-9](-*[a-zA-Z0-9]){0,62}
# --------------------------------------------------------------------------------
base_job_name = "fashion-mnist"
hyperparameters = {
    "epochs": 2, 
    "batch-size": 64
}
estimator = TensorFlow(
    entry_point="fashion_mnist.py",
    source_dir="src",
    metric_definitions=metric_definitions,
    hyperparameters=hyperparameters,
    role=role,
    input_mode='File',
    framework_version="2.3.1",
    py_version="py37",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    base_job_name=base_job_name,
    checkpoint_s3_uri=checkpoint_s3_uri,
    model_dir=False
)
estimator.fit()
import os
import argparse
import json
import multiprocessing

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, BatchNormalization
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras.layers.experimental.preprocessing import Normalization
from tensorflow.keras import backend as K

print("TensorFlow version: {}".format(tf.__version__))
print("Eager execution is: {}".format(tf.executing_eagerly()))
print("Keras version: {}".format(tf.keras.__version__))


image_width = 28
image_height = 28


def load_data():
    fashion_mnist = tf.keras.datasets.fashion_mnist
    (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

    number_of_classes = len(set(y_train))
    print("number_of_classes", number_of_classes)

    x_train = x_train / 255.0
    x_test = x_test / 255.0
    x_full = np.concatenate((x_train, x_test), axis=0)
    print(x_full.shape)

    print(type(x_train))
    print(x_train.shape)
    print(x_train.dtype)
    print(y_train.shape)
    print(y_train.dtype)

    # ## Train
    # * C: Convolution layer
    # * P: Pooling layer
    # * B: Batch normalization layer
    # * F: Fully connected layer
    # * O: Output fully connected softmax layer

    # Reshape data based on channels first / channels last strategy.
    # This is dependent on whether you use TF, Theano or CNTK as backend.
    # Source: https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py
    if K.image_data_format() == 'channels_first':
        x = x_train.reshape(x_train.shape[0], 1, image_width, image_height)
        x_test = x_test.reshape(x_test.shape[0], 1, image_width, image_height)
        input_shape = (1, image_width, image_height)
    else:
        x_train = x_train.reshape(x_train.shape[0], image_width, image_height, 1)
        x_test = x_test.reshape(x_test.shape[0], image_width, image_height, 1)
        input_shape = (image_width, image_height, 1)

    return x_train, y_train, x_test, y_test, input_shape, number_of_classes

# tensorboard --logdir=/full_path_to_your_logs

validation_split = 0.2
verbosity = 1
use_multiprocessing = True
workers = multiprocessing.cpu_count()


def train(model, x, y, args):
    # SavedModel Output
    tensorflow_saved_model_path = os.path.join(args.model_dir, "tensorflow/saved_model/0")
    os.makedirs(tensorflow_saved_model_path, exist_ok=True)

    # Tensorboard Logs
    tensorboard_logs_path = os.path.join(args.model_dir, "tensorboard/")
    os.makedirs(tensorboard_logs_path, exist_ok=True)

    tensorboard_callback = tf.keras.callbacks.TensorBoard(
        log_dir=tensorboard_logs_path,
        write_graph=True,
        write_images=True,
        histogram_freq=1,  # How often to log histogram visualizations
        embeddings_freq=1,  # How often to log embedding visualizations
        update_freq="epoch",
    )  # How often to write logs (default: once per epoch)

    model.compile(
        optimizer='adam',
        loss=tf.keras.losses.sparse_categorical_crossentropy,
        metrics=['accuracy']
    )
    history = model.fit(
        x,
        y,
        shuffle=True,
        batch_size=args.batch_size,
        epochs=args.epochs,
        validation_split=validation_split,
        use_multiprocessing=use_multiprocessing,
        workers=workers,
        verbose=verbosity,
        callbacks=[
            tensorboard_callback
        ]
    )
    return history


def create_model(input_shape, number_of_classes):
    model = Sequential([
        Conv2D(
            name="conv01",
            filters=32,
            kernel_size=(3, 3),
            strides=(1, 1),
            padding="same",
            activation='relu',
            input_shape=input_shape
        ),
        MaxPooling2D(
            name="pool01",
            pool_size=(2, 2)
        ),
        Flatten(),  # 3D shape to 1D.
        BatchNormalization(
            name="batch_before_full01"
        ),
        Dense(
            name="full01",
            units=300,
            activation="relu"
        ),  # Fully connected layer
        Dense(
            name="output_softmax",
            units=number_of_classes,
            activation="softmax"
        )
    ])
    return model


def save_model(model, args):
    # Save the model
    # A version number is needed for the serving container
    # to load the model
    version = "00000000"
    model_save_dir = os.path.join(args.model_dir, version)
    if not os.path.exists(model_save_dir):
        os.makedirs(model_save_dir)
    print(f"saving model at {model_save_dir}")
    model.save(model_save_dir)


def parse_args():
    # --------------------------------------------------------------------------------
    # https://docs.python.org/dev/library/argparse.html#dest
    # --------------------------------------------------------------------------------
    parser = argparse.ArgumentParser()

    # --------------------------------------------------------------------------------
    # hyperparameters Estimator argument are passed as command-line arguments to the script.
    # --------------------------------------------------------------------------------
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--batch-size', type=int, default=64)

    # /opt/ml/model
    # sagemaker.tensorflow.estimator.TensorFlow override 'model_dir'.
    # See https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/\
    # sagemaker.tensorflow.html#sagemaker.tensorflow.estimator.TensorFlow
    parser.add_argument('--model_dir', type=str, default=os.environ['SM_MODEL_DIR'])

    # /opt/ml/output
    parser.add_argument("--output_dir", type=str, default=os.environ["SM_OUTPUT_DIR"])

    args = parser.parse_args()
    return args


if __name__ == "__main__":
    args = parse_args()
    print("---------- key/value args")
    for key, value in vars(args).items():
        print(f"{key}:{value}")

    x_train, y_train, x_test, y_test, input_shape, number_of_classes = load_data()
    model = create_model(input_shape, number_of_classes)

    history = train(model=model, x=x_train, y=y_train, args=args)
    print(history)
    
    save_model(model, args)
    results = model.evaluate(x_test, y_test, batch_size=100)
    print("test loss, test accuracy:", results)
2021-09-03 03:02:04 Starting - Starting the training job...
2021-09-03 03:02:16 Starting - Launching requested ML instancesProfilerReport-1630638122: InProgress
......
2021-09-03 03:03:17 Starting - Preparing the instances for training.........
2021-09-03 03:04:59 Downloading - Downloading input data
2021-09-03 03:04:59 Training - Downloading the training image...
2021-09-03 03:05:23 Training - Training image download completed. Training in progress.2021-09-03 03:05:23.966037: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
2021-09-03 03:05:23.969704: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.
2021-09-03 03:05:24.118054: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
2021-09-03 03:05:26,842 sagemaker-training-toolkit INFO     Imported framework sagemaker_tensorflow_container.training
2021-09-03 03:05:26,852 sagemaker-training-toolkit INFO     No GPUs detected (normal if no gpus installed)
2021-09-03 03:05:27,734 sagemaker-training-toolkit INFO     Installing dependencies from requirements.txt:
/usr/local/bin/python3.7 -m pip install -r requirements.txt
WARNING: You are using pip version 21.0.1; however, version 21.2.4 is available.
You should consider upgrading via the '/usr/local/bin/python3.7 -m pip install --upgrade pip' command.

2021-09-03 03:05:29,028 sagemaker-training-toolkit INFO     No GPUs detected (normal if no gpus installed)
2021-09-03 03:05:29,045 sagemaker-training-toolkit INFO     No GPUs detected (normal if no gpus installed)
2021-09-03 03:05:29,062 sagemaker-training-toolkit INFO     No GPUs detected (normal if no gpus installed)
2021-09-03 03:05:29,072 sagemaker-training-toolkit INFO     Invoking user script

Training Env:

{
    "additional_framework_parameters": {},
    "channel_input_dirs": {},
    "current_host": "algo-1",
    "framework_module": "sagemaker_tensorflow_container.training:main",
    "hosts": [
        "algo-1"
    ],
    "hyperparameters": {
        "batch-size": 64,
        "epochs": 2
    },
    "input_config_dir": "/opt/ml/input/config",
    "input_data_config": {},
    "input_dir": "/opt/ml/input",
    "is_master": true,
    "job_name": "fashion-mnist-2021-09-03-03-02-02-305",
    "log_level": 20,
    "master_hostname": "algo-1",
    "model_dir": "/opt/ml/model",
    "module_dir": "s3://sagemaker-us-east-1-316725000538/fashion-mnist-2021-09-03-03-02-02-305/source/sourcedir.tar.gz",
    "module_name": "fashion_mnist",
    "network_interface_name": "eth0",
    "num_cpus": 4,
    "num_gpus": 0,
    "output_data_dir": "/opt/ml/output/data",
    "output_dir": "/opt/ml/output",
    "output_intermediate_dir": "/opt/ml/output/intermediate",
    "resource_config": {
        "current_host": "algo-1",
        "hosts": [
            "algo-1"
        ],
        "network_interface_name": "eth0"
    },
    "user_entry_point": "fashion_mnist.py"
}

Environment variables:

SM_HOSTS=["algo-1"]
SM_NETWORK_INTERFACE_NAME=eth0
SM_HPS={"batch-size":64,"epochs":2}
SM_USER_ENTRY_POINT=fashion_mnist.py
SM_FRAMEWORK_PARAMS={}
SM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"}
SM_INPUT_DATA_CONFIG={}
SM_OUTPUT_DATA_DIR=/opt/ml/output/data
SM_CHANNELS=[]
SM_CURRENT_HOST=algo-1
SM_MODULE_NAME=fashion_mnist
SM_LOG_LEVEL=20
SM_FRAMEWORK_MODULE=sagemaker_tensorflow_container.training:main
SM_INPUT_DIR=/opt/ml/input
SM_INPUT_CONFIG_DIR=/opt/ml/input/config
SM_OUTPUT_DIR=/opt/ml/output
SM_NUM_CPUS=4
SM_NUM_GPUS=0
SM_MODEL_DIR=/opt/ml/model
SM_MODULE_DIR=s3://sagemaker-us-east-1-316725000538/fashion-mnist-2021-09-03-03-02-02-305/source/sourcedir.tar.gz
SM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{},"current_host":"algo-1","framework_module":"sagemaker_tensorflow_container.training:main","hosts":["algo-1"],"hyperparameters":{"batch-size":64,"epochs":2},"input_config_dir":"/opt/ml/input/config","input_data_config":{},"input_dir":"/opt/ml/input","is_master":true,"job_name":"fashion-mnist-2021-09-03-03-02-02-305","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-east-1-316725000538/fashion-mnist-2021-09-03-03-02-02-305/source/sourcedir.tar.gz","module_name":"fashion_mnist","network_interface_name":"eth0","num_cpus":4,"num_gpus":0,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"fashion_mnist.py"}
SM_USER_ARGS=["--batch-size","64","--epochs","2"]
SM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate
SM_HP_BATCH-SIZE=64
SM_HP_EPOCHS=2
PYTHONPATH=/opt/ml/code:/usr/local/bin:/usr/local/lib/python37.zip:/usr/local/lib/python3.7:/usr/local/lib/python3.7/lib-dynload:/usr/local/lib/python3.7/site-packages

Invoking script with the following command:

/usr/local/bin/python3.7 fashion_mnist.py --batch-size 64 --epochs 2


TensorFlow version: 2.3.1
Eager execution is: True
Keras version: 2.4.0
---------- key/value args
epochs:2
batch_size:64
model_dir:/opt/ml/model
output_dir:/opt/ml/output
-----------------------
/opt/ml
├── input
│   ├── config
│   │   ├── hyperparameters.json  <--- From Estimator hyperparameter arg
│   │   └── resourceConfig.json
│   └── data
│       └── <channel_name>        <--- From Estimator fit method inputs arg
│           └── <input data>
├── code
│   └── <code files>              <--- From Estimator src_dir arg
├── model
│   └── <model files>             <--- Location to save the trained model artifacts
└── output
    └── failure                   <--- Training job failure logs
/opt/ml
├── input
│   ├── config
│   │   ├── hyperparameters.json
│   │   └── resourceConfig.json
│   └── data
│       └── <channel_name>
│           └── <input data>
├── model
│   └── <model files>
└── output
    └── failure
    # Save the model
    # A version number is needed for the serving container
    # to load the model
    version = "00000000"
    ckpt_dir = os.path.join(args.model_dir, version)
    if not os.path.exists(ckpt_dir):
        os.makedirs(ckpt_dir)
    model.save(ckpt_dir)
import sagemaker
from sagemaker import get_execution_role

sagemaker_session = sagemaker.Session()
role = get_execution_role()
bucket = sagemaker_session.default_bucket()

metric_definitions = [
    {"Name": "train:loss", "Regex": ".*loss: ([0-9\\.]+) - accuracy: [0-9\\.]+.*"},
    {"Name": "train:accuracy", "Regex": ".*loss: [0-9\\.]+ - accuracy: ([0-9\\.]+).*"},
    {
        "Name": "validation:accuracy",
        "Regex": ".*step - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: [0-9\\.]+ - val_accuracy: ([0-9\\.]+).*",
    },
    {
        "Name": "validation:loss",
        "Regex": ".*step - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: ([0-9\\.]+) - val_accuracy: [0-9\\.]+.*",
    },
    {
        "Name": "sec/sample",
        "Regex": ".* - \d+s (\d+)[mu]s/sample - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: [0-9\\.]+ - val_accuracy: [0-9\\.]+",
    },
]

import uuid

checkpoint_s3_prefix = "checkpoints/{}".format(str(uuid.uuid4()))
checkpoint_s3_uri = "s3://{}/{}/".format(bucket, checkpoint_s3_prefix)

from sagemaker.tensorflow import TensorFlow

# --------------------------------------------------------------------------------
# 'trainingJobName' msut satisfy regular expression pattern: ^[a-zA-Z0-9](-*[a-zA-Z0-9]){0,62}
# --------------------------------------------------------------------------------
base_job_name = "fashion-mnist"
hyperparameters = {
    "epochs": 2, 
    "batch-size": 64
}
estimator = TensorFlow(
    entry_point="fashion_mnist.py",
    source_dir="src",
    metric_definitions=metric_definitions,
    hyperparameters=hyperparameters,
    role=role,
    input_mode='File',
    framework_version="2.3.1",
    py_version="py37",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    base_job_name=base_job_name,
    checkpoint_s3_uri=checkpoint_s3_uri,
    model_dir=False
)
estimator.fit()
import os
import argparse
import json
import multiprocessing

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, BatchNormalization
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras.layers.experimental.preprocessing import Normalization
from tensorflow.keras import backend as K

print("TensorFlow version: {}".format(tf.__version__))
print("Eager execution is: {}".format(tf.executing_eagerly()))
print("Keras version: {}".format(tf.keras.__version__))


image_width = 28
image_height = 28


def load_data():
    fashion_mnist = tf.keras.datasets.fashion_mnist
    (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

    number_of_classes = len(set(y_train))
    print("number_of_classes", number_of_classes)

    x_train = x_train / 255.0
    x_test = x_test / 255.0
    x_full = np.concatenate((x_train, x_test), axis=0)
    print(x_full.shape)

    print(type(x_train))
    print(x_train.shape)
    print(x_train.dtype)
    print(y_train.shape)
    print(y_train.dtype)

    # ## Train
    # * C: Convolution layer
    # * P: Pooling layer
    # * B: Batch normalization layer
    # * F: Fully connected layer
    # * O: Output fully connected softmax layer

    # Reshape data based on channels first / channels last strategy.
    # This is dependent on whether you use TF, Theano or CNTK as backend.
    # Source: https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py
    if K.image_data_format() == 'channels_first':
        x = x_train.reshape(x_train.shape[0], 1, image_width, image_height)
        x_test = x_test.reshape(x_test.shape[0], 1, image_width, image_height)
        input_shape = (1, image_width, image_height)
    else:
        x_train = x_train.reshape(x_train.shape[0], image_width, image_height, 1)
        x_test = x_test.reshape(x_test.shape[0], image_width, image_height, 1)
        input_shape = (image_width, image_height, 1)

    return x_train, y_train, x_test, y_test, input_shape, number_of_classes

# tensorboard --logdir=/full_path_to_your_logs

validation_split = 0.2
verbosity = 1
use_multiprocessing = True
workers = multiprocessing.cpu_count()


def train(model, x, y, args):
    # SavedModel Output
    tensorflow_saved_model_path = os.path.join(args.model_dir, "tensorflow/saved_model/0")
    os.makedirs(tensorflow_saved_model_path, exist_ok=True)

    # Tensorboard Logs
    tensorboard_logs_path = os.path.join(args.model_dir, "tensorboard/")
    os.makedirs(tensorboard_logs_path, exist_ok=True)

    tensorboard_callback = tf.keras.callbacks.TensorBoard(
        log_dir=tensorboard_logs_path,
        write_graph=True,
        write_images=True,
        histogram_freq=1,  # How often to log histogram visualizations
        embeddings_freq=1,  # How often to log embedding visualizations
        update_freq="epoch",
    )  # How often to write logs (default: once per epoch)

    model.compile(
        optimizer='adam',
        loss=tf.keras.losses.sparse_categorical_crossentropy,
        metrics=['accuracy']
    )
    history = model.fit(
        x,
        y,
        shuffle=True,
        batch_size=args.batch_size,
        epochs=args.epochs,
        validation_split=validation_split,
        use_multiprocessing=use_multiprocessing,
        workers=workers,
        verbose=verbosity,
        callbacks=[
            tensorboard_callback
        ]
    )
    return history


def create_model(input_shape, number_of_classes):
    model = Sequential([
        Conv2D(
            name="conv01",
            filters=32,
            kernel_size=(3, 3),
            strides=(1, 1),
            padding="same",
            activation='relu',
            input_shape=input_shape
        ),
        MaxPooling2D(
            name="pool01",
            pool_size=(2, 2)
        ),
        Flatten(),  # 3D shape to 1D.
        BatchNormalization(
            name="batch_before_full01"
        ),
        Dense(
            name="full01",
            units=300,
            activation="relu"
        ),  # Fully connected layer
        Dense(
            name="output_softmax",
            units=number_of_classes,
            activation="softmax"
        )
    ])
    return model


def save_model(model, args):
    # Save the model
    # A version number is needed for the serving container
    # to load the model
    version = "00000000"
    model_save_dir = os.path.join(args.model_dir, version)
    if not os.path.exists(model_save_dir):
        os.makedirs(model_save_dir)
    print(f"saving model at {model_save_dir}")
    model.save(model_save_dir)


def parse_args():
    # --------------------------------------------------------------------------------
    # https://docs.python.org/dev/library/argparse.html#dest
    # --------------------------------------------------------------------------------
    parser = argparse.ArgumentParser()

    # --------------------------------------------------------------------------------
    # The 'hyperparameters' Estimator argument is passed as command-line arguments to the script.
    # --------------------------------------------------------------------------------
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--batch-size', type=int, default=64)

    # /opt/ml/model
    # sagemaker.tensorflow.estimator.TensorFlow overrides 'model_dir'.
    # See https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/\
    # sagemaker.tensorflow.html#sagemaker.tensorflow.estimator.TensorFlow
    parser.add_argument('--model_dir', type=str, default=os.environ['SM_MODEL_DIR'])

    # /opt/ml/output
    parser.add_argument("--output_dir", type=str, default=os.environ["SM_OUTPUT_DIR"])

    args = parser.parse_args()
    return args


if __name__ == "__main__":
    args = parse_args()
    print("---------- key/value args")
    for key, value in vars(args).items():
        print(f"{key}:{value}")

    x_train, y_train, x_test, y_test, input_shape, number_of_classes = load_data()
    model = create_model(input_shape, number_of_classes)

    history = train(model=model, x=x_train, y=y_train, args=args)
    print(history)
    
    save_model(model, args)
    results = model.evaluate(x_test, y_test, batch_size=100)
    print("test loss, test accuracy:", results)
2021-09-03 03:02:04 Starting - Starting the training job...
2021-09-03 03:02:16 Starting - Launching requested ML instancesProfilerReport-1630638122: InProgress
......
2021-09-03 03:03:17 Starting - Preparing the instances for training.........
2021-09-03 03:04:59 Downloading - Downloading input data
2021-09-03 03:04:59 Training - Downloading the training image...
2021-09-03 03:05:23 Training - Training image download completed. Training in progress.2021-09-03 03:05:23.966037: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
2021-09-03 03:05:23.969704: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.
2021-09-03 03:05:24.118054: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
2021-09-03 03:05:26,842 sagemaker-training-toolkit INFO     Imported framework sagemaker_tensorflow_container.training
2021-09-03 03:05:26,852 sagemaker-training-toolkit INFO     No GPUs detected (normal if no gpus installed)
2021-09-03 03:05:27,734 sagemaker-training-toolkit INFO     Installing dependencies from requirements.txt:
/usr/local/bin/python3.7 -m pip install -r requirements.txt
WARNING: You are using pip version 21.0.1; however, version 21.2.4 is available.
You should consider upgrading via the '/usr/local/bin/python3.7 -m pip install --upgrade pip' command.

2021-09-03 03:05:29,028 sagemaker-training-toolkit INFO     No GPUs detected (normal if no gpus installed)
2021-09-03 03:05:29,045 sagemaker-training-toolkit INFO     No GPUs detected (normal if no gpus installed)
2021-09-03 03:05:29,062 sagemaker-training-toolkit INFO     No GPUs detected (normal if no gpus installed)
2021-09-03 03:05:29,072 sagemaker-training-toolkit INFO     Invoking user script

Training Env:

{
    "additional_framework_parameters": {},
    "channel_input_dirs": {},
    "current_host": "algo-1",
    "framework_module": "sagemaker_tensorflow_container.training:main",
    "hosts": [
        "algo-1"
    ],
    "hyperparameters": {
        "batch-size": 64,
        "epochs": 2
    },
    "input_config_dir": "/opt/ml/input/config",
    "input_data_config": {},
    "input_dir": "/opt/ml/input",
    "is_master": true,
    "job_name": "fashion-mnist-2021-09-03-03-02-02-305",
    "log_level": 20,
    "master_hostname": "algo-1",
    "model_dir": "/opt/ml/model",
    "module_dir": "s3://sagemaker-us-east-1-316725000538/fashion-mnist-2021-09-03-03-02-02-305/source/sourcedir.tar.gz",
    "module_name": "fashion_mnist",
    "network_interface_name": "eth0",
    "num_cpus": 4,
    "num_gpus": 0,
    "output_data_dir": "/opt/ml/output/data",
    "output_dir": "/opt/ml/output",
    "output_intermediate_dir": "/opt/ml/output/intermediate",
    "resource_config": {
        "current_host": "algo-1",
        "hosts": [
            "algo-1"
        ],
        "network_interface_name": "eth0"
    },
    "user_entry_point": "fashion_mnist.py"
}

Environment variables:

SM_HOSTS=["algo-1"]
SM_NETWORK_INTERFACE_NAME=eth0
SM_HPS={"batch-size":64,"epochs":2}
SM_USER_ENTRY_POINT=fashion_mnist.py
SM_FRAMEWORK_PARAMS={}
SM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"}
SM_INPUT_DATA_CONFIG={}
SM_OUTPUT_DATA_DIR=/opt/ml/output/data
SM_CHANNELS=[]
SM_CURRENT_HOST=algo-1
SM_MODULE_NAME=fashion_mnist
SM_LOG_LEVEL=20
SM_FRAMEWORK_MODULE=sagemaker_tensorflow_container.training:main
SM_INPUT_DIR=/opt/ml/input
SM_INPUT_CONFIG_DIR=/opt/ml/input/config
SM_OUTPUT_DIR=/opt/ml/output
SM_NUM_CPUS=4
SM_NUM_GPUS=0
SM_MODEL_DIR=/opt/ml/model
SM_MODULE_DIR=s3://sagemaker-us-east-1-316725000538/fashion-mnist-2021-09-03-03-02-02-305/source/sourcedir.tar.gz
SM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{},"current_host":"algo-1","framework_module":"sagemaker_tensorflow_container.training:main","hosts":["algo-1"],"hyperparameters":{"batch-size":64,"epochs":2},"input_config_dir":"/opt/ml/input/config","input_data_config":{},"input_dir":"/opt/ml/input","is_master":true,"job_name":"fashion-mnist-2021-09-03-03-02-02-305","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-east-1-316725000538/fashion-mnist-2021-09-03-03-02-02-305/source/sourcedir.tar.gz","module_name":"fashion_mnist","network_interface_name":"eth0","num_cpus":4,"num_gpus":0,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"fashion_mnist.py"}
SM_USER_ARGS=["--batch-size","64","--epochs","2"]
SM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate
SM_HP_BATCH-SIZE=64
SM_HP_EPOCHS=2
PYTHONPATH=/opt/ml/code:/usr/local/bin:/usr/local/lib/python37.zip:/usr/local/lib/python3.7:/usr/local/lib/python3.7/lib-dynload:/usr/local/lib/python3.7/site-packages

Invoking script with the following command:

/usr/local/bin/python3.7 fashion_mnist.py --batch-size 64 --epochs 2


TensorFlow version: 2.3.1
Eager execution is: True
Keras version: 2.4.0
---------- key/value args
epochs:2
batch_size:64
model_dir:/opt/ml/model
output_dir:/opt/ml/output
-----------------------
/opt/ml
├── input
│   ├── config
│   │   ├── hyperparameters.json  <--- From Estimator hyperparameter arg
│   │   └── resourceConfig.json
│   └── data
│       └── <channel_name>        <--- From Estimator fit method inputs arg
│           └── <input data>
├── code
│   └── <code files>              <--- From Estimator src_dir arg
├── model
│   └── <model files>             <--- Location to save the trained model artifacts
└── output
    └── failure                   <--- Training job failure logs
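The same layout is surfaced to the entry point through environment variables (see the environment dump above), so a script need not hard-code /opt/ml. A minimal sketch, assuming the standard SM_* variables set by the SageMaker training toolkit; 'training' is a hypothetical channel name:

import os

model_dir = os.environ.get("SM_MODEL_DIR", "/opt/ml/model")    # where to write artifacts
output_dir = os.environ.get("SM_OUTPUT_DIR", "/opt/ml/output") # failure logs, etc.
# Each channel passed to estimator.fit(inputs=...) shows up as SM_CHANNEL_<NAME>.
training_dir = os.environ.get("SM_CHANNEL_TRAINING")           # None when no channel is given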
    # Save the model
    # A version number is needed for the serving container
    # to load the model
    version = "00000000"
    ckpt_dir = os.path.join(args.model_dir, version)
    if not os.path.exists(ckpt_dir):
        os.makedirs(ckpt_dir)
    model.save(ckpt_dir)
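Because model.save() writes a TensorFlow SavedModel under the numeric version directory, the artifact can be sanity-checked by loading it back. A sketch, using a hypothetical local copy of the artifact:

import tensorflow as tf

loaded = tf.saved_model.load("/opt/ml/model/00000000")
# A Keras model saved this way normally exposes a 'serving_default' signature.
print(list(loaded.signatures.keys()))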
import sagemaker
from sagemaker import get_execution_role

sagemaker_session = sagemaker.Session()
role = get_execution_role()
bucket = sagemaker_session.default_bucket()

metric_definitions = [
    {"Name": "train:loss", "Regex": ".*loss: ([0-9\\.]+) - accuracy: [0-9\\.]+.*"},
    {"Name": "train:accuracy", "Regex": ".*loss: [0-9\\.]+ - accuracy: ([0-9\\.]+).*"},
    {
        "Name": "validation:accuracy",
        "Regex": ".*step - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: [0-9\\.]+ - val_accuracy: ([0-9\\.]+).*",
    },
    {
        "Name": "validation:loss",
        "Regex": ".*step - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: ([0-9\\.]+) - val_accuracy: [0-9\\.]+.*",
    },
    {
        "Name": "sec/sample",
        "Regex": ".* - \d+s (\d+)[mu]s/sample - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: [0-9\\.]+ - val_accuracy: [0-9\\.]+",
    },
]
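SageMaker applies each regex to the job's log stream and publishes the first capture group as the metric value. A pattern can be validated locally against a hypothetical Keras progress line of the kind model.fit() prints with verbose=1:

import re

line = ("750/750 [==============================] - 12s 16ms/step - "
        "loss: 0.4133 - accuracy: 0.8551 - val_loss: 0.3254 - val_accuracy: 0.8844")
pattern = r".*step - loss: [0-9\.]+ - accuracy: [0-9\.]+ - val_loss: [0-9\.]+ - val_accuracy: ([0-9\.]+).*"
print(re.search(pattern, line).group(1))  # -> 0.8844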

import uuid

checkpoint_s3_prefix = "checkpoints/{}".format(str(uuid.uuid4()))
checkpoint_s3_uri = "s3://{}/{}/".format(bucket, checkpoint_s3_prefix)

from sagemaker.tensorflow import TensorFlow

# --------------------------------------------------------------------------------
# 'trainingJobName' must satisfy regular expression pattern: ^[a-zA-Z0-9](-*[a-zA-Z0-9]){0,62}
# --------------------------------------------------------------------------------
base_job_name = "fashion-mnist"
hyperparameters = {
    "epochs": 2, 
    "batch-size": 64
}
estimator = TensorFlow(
    entry_point="fashion_mnist.py",
    source_dir="src",
    metric_definitions=metric_definitions,
    hyperparameters=hyperparameters,
    role=role,
    input_mode='File',
    framework_version="2.3.1",
    py_version="py37",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    base_job_name=base_job_name,
    checkpoint_s3_uri=checkpoint_s3_uri,
    model_dir=False
)
estimator.fit()
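fit() blocks until the job finishes, streaming the container log shown earlier. Once it completes, the packed artifact can be located straight from the estimator; a one-line check, assuming the job succeeded:

# S3 location of the model.tar.gz packed from /opt/ml/model
print(estimator.model_data)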

How to fix the error (TypeError: Cannot assign to read only property 'map' of object '#<QueryCursor>')

# Before: FROM node:alpine  (an unpinned tag)
FROM node:16.5.0-alpine

WORKDIR /app
COPY package.json .
RUN npm install --only=prod
COPY . .

CMD ["npm", "start"]

Laravel eloquent with multiple inner joins

Entries::with(['Tasks','Class','Subclass'])->get();
Entries::with(['Tasks','Class.Subclass'])->get();
-----------------------
$timeBreakDown = Entries::where('opportunity_id', $opportunity->id)->with('Tasks', 'Class.SubClass')->get();
-----------------------
$timeBreakDown = Entries::select(
        'entries.task_id',
        'entries.opportunity_id',
        DB::raw('SUM(entries.total_duration) as duration'),
        'task_class.class',
        'task_subclasses.sub_class as subclass')
    ->join('tasks', 'tasks.id', '=', 'entries.task_id')
    ->join('task_class', 'task_class.id', '=', 'entries.class_id')
    ->join('task_subclasses', 'task_subclasses.id', '=', 'entries.subclass_id')
    ->where('entries.opportunity_id', $opportunity->id)
    ->groupBy('entries.task_id')
    ->get();
-----------------------
$timeBreakDown = Entries::join('tasks', 'tasks.id', '=', 'entries.task_id')
            ->join('class', 'class.id', '=', 'entries.class_id')
            ->join('subclass', 'subclass.id', '=', 'entries.subclass_id')
            ->select(
                'entries.task_id',
                'entries.opportunity_id',
                \DB::raw('SUM(entries.total_duration) as duration'),
                'class.class',
                'subclass.sub_class as subclass')
            ->where('entries.opportunity_id', $opportunity->id)
            ->groupBy('entries.task_id')
            ->get();
-----------------------
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Entries extends Model
{
    /**
     * Get the task associated with the entry.
     */
    public function task()
    {
        return $this->hasOne(Tasks::class);
    }
}
<?php
    
  namespace App\Models;
   
  use Illuminate\Database\Eloquent\Model;
    
  class Tasks extends Model
  {
      /**
       * Get the entry that owns the task.
       */
      public function entry()
      {
          return $this->belongsTo(Entries::class);
      }
  }
-----------------------
// NB: 'Class' is a reserved word in PHP; the name is kept only to mirror the question.
class Class extends Model
{ 

  protected $with = ['entries', 'subclass', 'task'];

  public function entries()
  {
      return $this->hasManyThrough(\App\Models\Entries::class, \App\Models\Task::class);
  }

  public function subClass()
  {
      return $this->hasOne(\App\Models\subClass::class);
  }

  public function tasks()
  {
    return $this->hasMany(\App\Models\Task::class);
  }
}
class Entry extends Model
{ 

  protected $with = ['task'];

  public function task()
  {
    return $this->belongsTo(Task::class);
  }
}
class SubClass extends Model
{ 

  protected $with = ['class'];

  public function class()
  {
    return $this->belongsTo(\App\Models\Class::class);
  }
}
class Task extends Model
{ 

  protected $with = ['entries', 'class'];

  public function entries()
  {
      return $this->hasMany(\App\Models\Entry::class);
  }

  public function class()
  {
    return $this->belongsTo(\App\Models\Class::class);
  }
}
$entry = Entry::findOrFail('id');
$entry->task->class->subClass->name;
$class = Class::findOrFail($class->id);
$subclass_name = $class->subclass->name;
$entries = $class->tasks->entries;
-----------------------
DB::table('entries')->join('tasks', 'entries.task_id', '=', 'tasks.id')
->join('task_class', 'entries.class_id', '=', 'task_class.id')
->join('task_subclasses', 'entries.subclass_id', '=', 'task_subclasses.id')
->selectRaw('entries.task_id,
            entries.opportunity_id,
            SUM(entries.total_duration) as duration,
            task_class.class as class,
            task_subclasses.sub_class as subclass') 
->where(['entries.opportunity_id' => $opportunity->id])
->groupBy('entries.task_id')->get();
-----------------------
<?php

namespace App\Models;


use Illuminate\Database\Eloquent\Model;

class Entries extends Model
{
    public function Tasks(){
        return $this->hasOne(Tasks::class);
    }
    public function Classes(){
        return $this->hasMany(Classes::class);
    }
    public function SubClasses(){
        return $this->hasOne(SubClasses::class);
    }
}
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Tasks extends Model
{
    public function Entries(){
        return $this->belongsTo(Entries::class, "id", "task_id");
    }
}
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Classes extends Model
{
    public function Entries(){
        return $this->belongsTo(Entries::class, "class_id", "id");
    }
}
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class SubClasses extends Model
{
    public function Entries(){
        
        return $this->belongsTo(Entries::class, "id", "subclass_id");
        
    }
}
Entries::with([
        "Tasks",
        "Classes",
        "SubClasses"
    ])
    ->where("opportunity_id", $opportunity->id)
    ->groupBy("task_id")
    ->get();

ImportError: cannot import name 'BatchNormalization' from 'keras.layers.normalization'

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (
    BatchNormalization, SeparableConv2D, MaxPooling2D, Activation, Flatten, Dropout, Dense
)
from tensorflow.keras import backend as K


class CancerNet:
    @staticmethod
    def build(width, height, depth, classes):
        model = Sequential()
        shape = (height, width, depth)
        channelDim = -1

        if K.image_data_format() == "channels_first":
            shape = (depth, height, width)
            channelDim = 1

        model.add(SeparableConv2D(32, (3, 3), padding="same", input_shape=shape))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=channelDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))

        model.add(SeparableConv2D(64, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=channelDim))
        model.add(SeparableConv2D(64, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=channelDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))

        model.add(SeparableConv2D(128, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=channelDim))
        model.add(SeparableConv2D(128, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=channelDim))
        model.add(SeparableConv2D(128, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=channelDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))

        model.add(Flatten())
        model.add(Dense(256))
        model.add(Activation("relu"))
        model.add(BatchNormalization())
        model.add(Dropout(0.5))

        model.add(Dense(classes))
        model.add(Activation("softmax"))

        return model

# build() is a @staticmethod; the dimensions below are hypothetical, for illustration only
model = CancerNet.build(width=48, height=48, depth=3, classes=2)
-----------------------
from tensorflow.keras.layers import BatchNormalization
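If the same code base must also run on environments that still ship standalone Keras, a tolerant import is one option; a minimal sketch, not part of the fix above:

try:
    from tensorflow.keras.layers import BatchNormalization
except ImportError:  # older installs that only have standalone Keras
    from keras.layers.normalization import BatchNormalization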

How to setup Django permissions to be specific to a certain model's instances?

# model

from django.contrib.auth.models import User
from django.db import models


class Project(models.Model):
    pass


class Page(models.Model):
    project = models.ForeignKey(Project, on_delete=models.CASCADE)


class FirstVisit(models.Model):
    url = models.URLField()
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    page = models.ForeignKey(Page, on_delete=models.CASCADE)


#views.py

def my_view(request):
    # `request.page` is assumed to be set elsewhere (e.g. by middleware)
    if not FirstVisit.objects.filter(user=request.user.id, url=request.path, page=request.page.id).exists():
        # first time visit == no existing instance
        # your code...
        FirstVisit(user=request.user, url=request.path, page_id=request.page.id).save()
-----------------------
from django.db import models
from my_app import Project

class CustomUser(...):
    projects = models.ManyToManyField(Project)

    def has_perm(self, perm, obj=None):
        if isinstance(obj, Project):
            # deny if the user is not assigned to this project
            if obj not in self.projects.all():
                return False
        return super().has_perm(perm, obj)
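
A hedged usage sketch (the model and permission names are illustrative): with this override, an object-scoped check is denied outright for projects the user is not assigned to; whether it succeeds otherwise still depends on the configured authentication backends:

user = CustomUser.objects.first()
project = Project.objects.first()
allowed = user.has_perm("my_app.change_project", obj=project)  # False if project not in user.projects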
-----------------------
# models.py

from django.conf import settings
from django.contrib import auth
from django.contrib.auth.models import AbstractUser, Group, Permission
from django.core.exceptions import PermissionDenied
from django.db import models

# `Project` is assumed to be defined in (or imported into) this module.


class ProjectPermission(models.Model):
    """A permission that is valid for a specific project."""

    project = models.ForeignKey(Project, on_delete=models.CASCADE)
    base_permission = models.ForeignKey(
        Permission, on_delete=models.CASCADE, related_name="project_permission"
    )
    # reference the custom user model (defined below) via AUTH_USER_MODEL
    users = models.ManyToManyField(settings.AUTH_USER_MODEL, related_name="user_project_permissions")
    groups = models.ManyToManyField(Group, related_name="project_permissions")

    class Meta:
        indexes = [models.Index(fields=["project", "base_permission"])]
        unique_together = ["project", "base_permission"]


def _user_has_project_perm(user, perm, project):
    """
    A backend can raise `PermissionDenied` to short-circuit permission checking.
    """
    for backend in auth.get_backends():
        if not hasattr(backend, "has_project_perm"):
            continue
        try:
            if backend.has_project_perm(user, perm, project):
                return True
        except PermissionDenied:
            return False
    return False


class User(AbstractUser):
    def has_project_perm(self, perm, project):
        """Return True if the user has the specified permission in a project."""
        # Active superusers have all permissions.
        if self.is_active and self.is_superuser:
            return True

        # Otherwise we need to check the backends.
        return _user_has_project_perm(self, perm, project)
# auth_backends.py

from django.contrib.auth import get_user_model
from django.contrib.auth.backends import ModelBackend
from django.contrib.auth.models import Permission


class ProjectBackend(ModelBackend):
    """A backend that understands project-specific authorization."""

    def _get_user_project_permissions(self, user_obj, project):
        return Permission.objects.filter(
            project_permission__users=user_obj, project_permission__project=project
        )

    def _get_group_project_permissions(self, user_obj, project):
        user_groups_field = get_user_model()._meta.get_field("groups")
        user_groups_query = (
            "project_permission__groups__%s" % user_groups_field.related_query_name()
        )
        return Permission.objects.filter(
            **{user_groups_query: user_obj}, project_permission__project=project
        )

    def _get_project_permissions(self, user_obj, project, from_name):
        if not user_obj.is_active or user_obj.is_anonymous:
            return set()

        perm_cache_name = f"_{from_name}_project_{project.pk}_perm_cache"
        if not hasattr(user_obj, perm_cache_name):
            if user_obj.is_superuser:
                perms = Permission.objects.all()
            else:
                perms = getattr(self, "_get_%s_project_permissions" % from_name)(
                    user_obj, project
                )
            perms = perms.values_list("content_type__app_label", "codename").order_by()
            setattr(
                user_obj, perm_cache_name, {"%s.%s" % (ct, name) for ct, name in perms}
            )
        return getattr(user_obj, perm_cache_name)

    def get_user_project_permissions(self, user_obj, project):
        return self._get_project_permissions(user_obj, project, "user")

    def get_group_project_permissions(self, user_obj, project):
        return self._get_project_permissions(user_obj, project, "group")

    def get_all_project_permissions(self, user_obj, project):
        return {
            *self.get_user_project_permissions(user_obj, project),
            *self.get_group_project_permissions(user_obj, project),
            *self.get_user_permissions(user_obj),
            *self.get_group_permissions(user_obj),
        }

    def has_project_perm(self, user_obj, perm, project):
        return perm in self.get_all_project_permissions(user_obj, project)
# settings.py

AUTHENTICATION_BACKENDS = ["django_project.projects.auth_backends.ProjectBackend"]
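
A hedged usage sketch (object and permission names are illustrative): once the backend is registered in AUTHENTICATION_BACKENDS, project-scoped checks go through has_project_perm rather than the stock has_perm:

project = Project.objects.get(pk=1)
if request.user.has_project_perm("projects.change_project", project):
    ...  # the user holds this permission inside this specific project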

How to implement a global exception handling in Raku Cro Application

my $application = route {
    around -> &handler {
        # Invoke the route handler
        handler();
        CATCH {
            # If any handler produces this exception...
            when Some::Domain::Exception::UpdatingOldVersion {
                # ...return a HTTP 409 Conflict response.
                conflict;
            }
        }
    }

    # Put your get, post, etc. here.
}

Distinguishing between Pydantic Models with same fields

from abc import ABC
from pydantic import BaseModel
from typing import Union, Literal

class Data(BaseModel, ABC):
    """ base class for a Member """
    number: float


class DataA(Data):
    """ A type of Data"""
    tag: Literal['A'] = 'A'


class DataB(Data):
    """ Another type of Data """
    tag: Literal['B'] = 'B'


class Container(BaseModel):
    """ container holds a subclass of Data """
    data: Union[DataA, DataB]


# initialize container with DataB
data = DataB(number=1.0)
container = Container(data=data)

# export container to string and load new container from string
string = container.json()
new_container = Container.parse_raw(string)


# look at type of container.data
print(type(new_container.data).__name__)
# >>> DataB
from pydantic.fields import ModelField

# BaseModel, ABC and Literal are reused from the imports in the previous snippet
class Data(BaseModel, ABC):
    """ base class for a Member """
    number: float

    def __init_subclass__(cls, **kwargs):
        name = 'tag'
        value = cls.__name__
        annotation = Literal[value]

        tag_field = ModelField.infer(name=name, value=value, annotation=annotation, class_validators=None, config=cls.__config__)
        cls.__fields__[name] = tag_field
        cls.__annotations__[name] = annotation


class DataA(Data):
    """ A type of Data"""
    pass


class DataB(Data):
    """ Another type of Data """
    pass
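
As a related sketch (this relies on pydantic 1.9+ discriminated unions, which are not part of the original answer): once every subclass carries a Literal tag field, the Union can name the discriminator explicitly, making dispatch stricter and error messages clearer:

from pydantic import Field

class Container(BaseModel):
    """ container holds a subclass of Data, dispatched on the tag field """
    data: Union[DataA, DataB] = Field(..., discriminator='tag')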
-----------------------
import json

def register_distinct_subclasses(fields: tuple):
    """ fields is a tuple of subclasses that we want to be registered as distinct """

    field_map = {field.__name__: field for field in fields}

    def _register_distinct_subclasses(cls):
        """ cls is the superclass of fields, to which we add a new validator """

        orig_init = cls.__init__

        # subclass cls so the original pydantic fields are preserved
        class _class(cls):
            class_name: str

            def __init__(self, **kwargs):
                # record which concrete subclass produced this instance
                kwargs["class_name"] = type(self).__name__
                orig_init(self, **kwargs)

            @classmethod
            def __get_validators__(cls):
                yield cls.validate

            @classmethod
            def validate(cls, v):
                if isinstance(v, dict):
                    class_name = v.get("class_name")
                    json_string = json.dumps(v)
                else:
                    class_name = v.class_name
                    json_string = v.json()
                cls_type = field_map[class_name]
                return cls_type.parse_raw(json_string)

        return _class
    return _register_distinct_subclasses
Data = register_distinct_subclasses((DataA, DataB))(Data)
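
A hedged usage sketch (it assumes the Data/DataA/DataB definitions from the first snippet and pydantic v1's default of ignoring extra fields): the decorated superclass dispatches validation to whichever subclass is named in class_name:

class Container(BaseModel):
    data: Data  # validated through the decorated superclass

container = Container(data={"class_name": "DataA", "number": 1.0})
assert type(container.data) is DataA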
-----------------------
from abc import ABC
from dataclasses import dataclass
from typing import Union

from dataclass_wizard import JSONWizard


@dataclass
class Data(ABC):
    """ base class for a Member """
    number: float


class DataA(Data, JSONWizard):
    """ A type of Data"""

    class _(JSONWizard.Meta):
        """
        This defines a custom tag that uniquely identifies the dataclass.
        """
        tag = 'A'


class DataB(Data, JSONWizard):
    """ Another type of Data """

    class _(JSONWizard.Meta):
        """
        This defines a custom tag that uniquely identifies the dataclass.
        """
        tag = 'B'


@dataclass
class Container(JSONWizard):
    """ container holds a subclass of Data """
    data: Union[DataA, DataB]
print('== Load with DataA ==')

input_dict = {
    'data': {
        'number': '1.0',
        '__tag__': 'A'
    }
}

# De-serialize the `dict` object to a `Container` instance.
container = Container.from_dict(input_dict)

print(repr(container))
# prints:
#   Container(data=DataA(number=1.0))

# Show the prettified JSON representation of the instance.
print(container)

# Assert we load the correct dataclass from the annotated `Union` types
assert type(container.data) == DataA

print()

print('== Load with DataB ==')

# initialize container with DataB
data_b = DataB(number=2.0)
container = Container(data=data_b)

print(repr(container))
# prints:
#   Container(data=DataB(number=2.0))

# Show the prettified JSON representation of the instance.
print(container)

# Assert we load the correct dataclass from the annotated `Union` types
assert type(container.data) == DataB

# Assert we end up with the same instance when serializing and de-serializing
# our data.
string = container.to_json()
assert container == Container.from_json(string)
from abc import ABC
from dataclasses import dataclass
from typing import Union

from dataclass_wizard import asdict, fromdict, LoadMeta


@dataclass
class Data(ABC):
    """ base class for a Member """
    number: float


class DataA(Data):
    """ A type of Data"""


class DataB(Data):
    """ Another type of Data """


@dataclass
class Container:
    """ container holds a subclass of Data """
    data: Union[DataA, DataB]


# Setup tags for the dataclasses. This can be passed into either
# `LoadMeta` or `DumpMeta`.
#
# Note that I'm not a fan of this syntax either, so it might change. I was
# thinking of something more explicit, like `LoadMeta(...).bind_to(class)`
LoadMeta(DataA, tag='A')
LoadMeta(DataB, tag='B')

# The rest is the same as before.

# initialize container with DataB
data = DataB(number=2.0)
container = Container(data=data)

print(repr(container))
# prints:
#   Container(data=DataB(number=2.0))

# Assert we load the correct dataclass from the annotated `Union` types
assert type(container.data) == DataB

# Assert we end up with the same data when serializing and de-serializing.
out_dict = asdict(container)
assert container == fromdict(Container, out_dict)

Does Django's select_for_update method work with the update method?

with transaction.atomic():
    qs = polls.models.Question.objects.select_for_update().all()
    qs.update(question_text='test')

print(connection.queries)
# {'sql': 'UPDATE "polls_question" SET "question_text" = \'test\'', 'time': '0.008'}

with transaction.atomic():
    qs = polls.models.Question.objects.select_for_update().all()
    list(qs) # cause evaluation, locking the selected rows
    qs.update(question_text='test')

print(connection.queries)
#[...
# {'sql': 'SELECT "polls_question"."id", "polls_question"."question_text", "polls_question"."pub_date" FROM "polls_question" FOR UPDATE', 'time': '0.003'},
# {'sql': 'UPDATE "polls_question" SET "question_text" = \'test\'', 'time': '0.001'}
#]
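
As a hedged extension (skip_locked is a real select_for_update option, but this exact pattern is not from the original answer): when the row locks are the point, evaluate the queryset inside the transaction before writing, then save the rows individually:

with transaction.atomic():
    # evaluating the queryset issues SELECT ... FOR UPDATE and acquires the locks
    questions = list(polls.models.Question.objects.select_for_update(skip_locked=True))
    for question in questions:
        question.question_text = 'test'
        question.save()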

Community Discussions

Trending Discussions on models
  • Entity Framework | Sequence contains more than one matching element
  • Keras AttributeError: 'Sequential' object has no attribute 'predict_classes'
  • cannot import name '_registerMatType' from 'cv2.cv2'
  • How to use SageMaker Estimator for model training and saving
  • How to fix the error (TypeError: Cannot assign to read only property 'map' of object '#<QueryCursor>')
  • Google app engine deployment fails- Error while finding module specification for 'pip' (AttributeError: module '__main__' has no attribute '__file__')
  • Laravel eloquent with multiple inner joins
  • ImportError: cannot import name 'BatchNormalization' from 'keras.layers.normalization'
  • How to setup Django permissions to be specific to a certain model's instances?
  • How to implement a global exception handling in Raku Cro Application

QUESTION

Entity Framework | Sequence contains more than one matching element

Asked 2022-Mar-31 at 09:23

I used the database-first approach. The model is right (or at least it looks right), but I always get this error. I've already tried so many things. The full code of my program (and even the SQL script by which I create my database) is here: https://github.com/AntonioParroni/test-task-for-backend-stack/blob/main/Server/Models/ApplicationContext.cs

Since I have a Mac, I created my model with dotnet ef CLI commands (dbcontext scaffold). I can use my context, but I can't touch any DbSet.

public static void Main(string[] args)
    {
        using (ApplicationContext context = new ApplicationContext())
        {
            Console.WriteLine(context.Database.CanConnect());
            var months = context.Months.ToList();
            foreach (var month in months)
            {
                Console.WriteLine(month.MonthName);
            }
        }
        //CreateHostBuilder(args).Build().Run();
    }

It is not my first time using EF, and everything worked fine before in many simple projects and tasks. Here, it doesn't matter what I do (I even tried renaming all of my column names, erasing all tables except one, modifying the context code, and repeating the same steps on a new, totally empty project). It is always:

Unhandled exception. System.TypeInitializationException: The type initializer for 'Microsoft.EntityFrameworkCore.Query.Internal.NavigationExpandingExpressionVisitor' threw an exception.
 ---> System.TypeInitializationException: The type initializer for 'Microsoft.EntityFrameworkCore.Query.QueryableMethods' threw an exception.
     ---> System.InvalidOperationException: Sequence contains more than one matching element
       at System.Linq.ThrowHelper.ThrowMoreThanOneMatchException() in System.Linq.dll:token 0x600041c+0xa
etc.

Here is my package reference file

"restore":{"projectUniqueName":"/Users/mac/Documents/GitHub/test-task-for-backend-stack/Server/Server.csproj",
    "projectName":"Server","projectPath":"/Users/mac/Documents/GitHub/test-task-for-backend-stack/Server/Server.csproj",
    "outputPath":"/Users/mac/Documents/GitHub/test-task-for-backend-stack/Server/obj/","projectStyle":
    "PackageReference","originalTargetFrameworks":["net6.0"],"sources":{"https://api.nuget.org/v3/index.json":{}},
    "frameworks":{"net6.0":{"targetAlias":"net6.0","projectReferences":{}}},
    "warningProperties":{"warnAsError":["NU1605"]}}"frameworks":{"net6.0":
        {"targetAlias":"net6.0","dependencies":{"EntityFramework":
            {"target":"Package","version":"[6.4.4, )"},
            "Microsoft.EntityFrameworkCore.Design":{"include":"Runtime, Build, Native, ContentFiles, Analyzers, BuildTransitive",
            "suppressParent":"All","target":"Package","version":"[5.0.0, )"},
            "Microsoft.EntityFrameworkCore.SqlServer":{"target":"Package","version":"[5.0.0, )"},
            "Swashbuckle.AspNetCore":{"target":"Package","version":"[5.6.3, )"}},
            "imports":["net461","net462","net47","net471","net472","net48"],
            "assetTargetFallback":true,"warn":true,
            "frameworkReferences":{"Microsoft.AspNetCore.App":{"privateAssets":"none"},
            "Microsoft.NETCore.App":{"privateAssets":"all"}},
        "runtimeIdentifierGraphPath":"/usr/local/share/dotnet/sdk/6.0.100-preview.6.21355.2/RuntimeIdentifierGraph.json"}}

It's been already a few days. And I'm becoming really confused and mad. Why is this happening.. and why there is not that much info about this type of error in the internet. Please, just point me in the right direction..

ANSWER

Answered 2022-Mar-31 at 09:23

You target net6.0, which is still not released, while you have installed EntityFramework 6, a previous iteration of Entity Framework (mainly used with legacy .NET Framework projects), and you also have EF Core (the modern iteration), but an older version, 5.0, which is what your context actually uses (see the using Microsoft.EntityFrameworkCore; statements there).

Try removing the EntityFramework package and installing a preview version of Microsoft.EntityFrameworkCore.SqlServer (just updating to the latest 5.x version may also help), and either removing Microsoft.EntityFrameworkCore.Design completely or installing its preview version. (I would also recommend updating your SDK to the RC and installing RC versions of the packages.)

Or try removing the reference to EntityFramework (not the Core one) and changing the target framework to net5.0 (if you have it installed on your machine).

As for why you see this exception: I would guess it is related to new methods added to Queryable in .NET 6, which made one of these checks fail.

TL;DR

As mentioned in the comments - update EF Core to the corresponding latest version (worked for 5.0 and 3.1) or update to .NET 6.0 and EF Core 6.

Source https://stackoverflow.com/questions/69236321

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

Vulnerabilities

No vulnerabilities reported

Install models

You can install using 'pip install models' or download it from GitHub, PyPI.
You can use models like any standard Python library. You will need a development environment consisting of a Python distribution (including header files), a compiler, pip, and git. Make sure that your pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid changes to the system.

Support

If you want to contribute, please review the contribution guidelines.

  • © 2022 Open Weaver Inc.