
pickle | PHP Extension installer | Build Tool library

by FriendsOfPHP | PHP | Version: v0.7.2 | License: Non-SPDX

kandi X-RAY | pickle Summary

pickle is a PHP library typically used in Utilities, Build Tool, Composer applications. pickle has no bugs, it has no vulnerabilities and it has medium support. However pickle has a Non-SPDX License. You can download it from GitHub.
pickle - PHP Extension installer [![SensioLabsInsight](https://insight.sensiolabs.com/projects/7e153d04-79be-47e6-b2ee-60cdc2665dd5/small.png)](https://insight.sensiolabs.com/projects/7e153d04-79be-47e6-b2ee-60cdc2665dd5).

Support

  • pickle has a medium-activity ecosystem.
  • It has 1428 stars, 75 forks, and 61 watchers.
  • It had no major release in the last 12 months.
  • There are 20 open issues and 88 closed issues; on average, issues are closed in 555 days. There are 3 open pull requests and 0 closed pull requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of pickle is v0.7.2.

Quality

  • pickle has 0 bugs and 0 code smells.

Security

  • pickle has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • pickle code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • pickle has a Non-SPDX License.
  • A Non-SPDX license may be an open-source license that simply is not SPDX-compliant, or a non-open-source license; review it closely before use.

Reuse

  • pickle releases are available to install and integrate.
  • Installation instructions are not available. Examples and code snippets are available.
Top functions reviewed by kandi - BETA

kandi has reviewed pickle and discovered the below as its top functions. This is intended to give you an instant insight into pickle implemented functionality, and help decide if they suit your requirements.

  • Load a configuration file.
  • Create a new package.
  • Build the options.
  • Install on Windows.
  • Return information about the configuration file.
  • Fetch an argument from the config.
  • Set up the position of the pickle section.
  • Read a package from a path.
  • Fetch the DLL mapping file.
  • Get info from PHP constants.

pickle Key Features

PHP Extension installer

default

Grab the latest phar at https://github.com/FriendsOfPHP/pickle/releases/latest
```sh
wget https://github.com/FriendsOfPHP/pickle/releases/latest/download/pickle.phar
```

and run it using
```sh
$ php pickle.phar
```
or add the execute flag
```sh
$ chmod +x pickle.phar
```
then run it as:
```sh
$ ./pickle.phar info apcu
```
You can also rename the phar to "pickle"
```sh
$ mv pickle.phar pickle
```
so it can be invoked as plain pickle.

Finally, you can add it to your PATH, or copy it into /usr/local/bin or your favorite binary directory.

On Windows, use
```sh
$ php pickle.phar
```
or create a .bat file containing:
```
@echo OFF
setlocal DISABLEDELAYEDEXPANSION
c:\path\to\php.exe "c:\path\to\pickle.phar" %*
```

If someone would be kind enough to write an installer script, we would be eternally thankful :)
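The setup steps above can be rehearsed end to end without network access; the sketch below uses an empty placeholder file in place of the real phar (which would come from the wget step), so only the chmod/rename mechanics are exercised.

```shell
# stand-in for the downloaded phar (the real file comes from the wget step above)
touch pickle.phar

# make it executable, then rename it so it can be called as plain "pickle"
chmod +x pickle.phar
mv pickle.phar pickle

# the result is an executable named "pickle" in the current directory
test -x pickle && echo "ready"
```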
                      
Introduction

Pickle and Numpy versions
```python
import pickle
import joblib

# load the old model and re-dump only its raw training data
model = pickle.load(open('model.pkl', 'rb'), encoding='latin1')
joblib.dump(model.tree_.get_arrays()[0], 'training_data.pkl')
```

```python
import pickle
import joblib
from sklearn.neighbors import KernelDensity

# refit a fresh KernelDensity estimator on the extracted training data
data = joblib.load('training_data.pkl')
kde = KernelDensity(
    algorithm='auto',
    atol=0,
    bandwidth=0.5,
    breadth_first=True,
    kernel='gaussian',
    leaf_size=40,
    metric='euclidean',
    metric_params=None,
    rtol=0,
).fit(data)

with open('new_model.pkl', 'wb') as f:
    pickle.dump(kde, f)
```
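The underlying issue is that a pickle of a fitted estimator may fail to load under a different library version, while re-dumping only the raw data keeps it portable. A minimal sketch of a version-safe round trip, using only the standard `pickle` module (no sklearn required):

```python
import pickle

# plain data structures survive version changes far better than fitted estimators
training_data = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]

# protocol 4 is readable by every Python >= 3.4
blob = pickle.dumps(training_data, protocol=4)
restored = pickle.loads(blob)
assert restored == training_data
```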
                      

TorchText Vocab TypeError: Vocab.__init__() got an unexpected keyword argument 'min_freq'

```python
from torchtext.datasets import IMDB
from collections import Counter
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import vocab

tokenizer = get_tokenizer('basic_english')
train_iter = IMDB(split='train')
test_iter = IMDB(split='test')

counter = Counter()
for (label, line) in train_iter:
    counter.update(tokenizer(line))

# note: this rebinds the name `vocab` from the factory function to the Vocab object
vocab = vocab(counter, min_freq=1, specials=('<unk>', '<BOS>', '<EOS>', '<PAD>'))
```
                      
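The `min_freq` filtering that torchtext's `vocab()` factory performs can be sketched with the standard library alone. The `build_vocab` helper below is a hypothetical stand-in for illustration, not the torchtext API:

```python
from collections import Counter

def build_vocab(counter, min_freq=1, specials=('<unk>', '<pad>')):
    # specials first, then tokens meeting the frequency threshold, most common first
    itos = list(specials) + [tok for tok, c in counter.most_common() if c >= min_freq]
    return {tok: idx for idx, tok in enumerate(itos)}

counter = Counter("the cat sat on the mat the cat".split())
stoi = build_vocab(counter, min_freq=2)
# 'the' (x3) and 'cat' (x2) pass min_freq=2; 'sat', 'on', 'mat' do not
```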

Using RNN Trained Model without pytorch installed

```python
import torch
import torch.nn as nn
import random

torch.manual_seed(1)
random.seed(1)
device = torch.device('cpu')

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, num_layers,
                 matching_in_out=False, batch_size=1):
        super(RNN, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.num_layers = num_layers
        self.batch_size = batch_size
        self.matching_in_out = matching_in_out  # length of input vector matches the length of output vector
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers)
        self.hidden2out = nn.Linear(hidden_size, output_size)

    def forward(self, x, h0, c0):
        lstm_out, (hidden_a, hidden_b) = self.lstm(x, (h0, c0))
        outs = self.hidden2out(lstm_out)
        return outs, (hidden_a, hidden_b)

    def init_hidden(self):
        return (torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device).detach(),
                torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device).detach())

# wrap the model so keyword arguments can be passed during the onnx.export call
class MWrapper(nn.Module):
    def __init__(self, model):
        super(MWrapper, self).__init__()
        self.model = model

    def forward(self, kwargs):
        return self.model(**kwargs)

rnn = RNN(10, 10, 10, 3)
X = torch.randn(3, 1, 10)
h0, c0 = rnn.init_hidden()
print(rnn(X, h0, c0)[0])

torch.onnx.export(MWrapper(rnn), {'x': X, 'h0': h0, 'c0': c0}, 'rnn.onnx',
                  dynamic_axes={'x': {1: 'N'},
                                'c0': {1: 'N'},
                                'h0': {1: 'N'}},
                  input_names=['x', 'h0', 'c0'],
                  output_names=['y', 'hn', 'cn'])

import onnxruntime
ort_model = onnxruntime.InferenceSession('rnn.onnx')
print(ort_model.run(['y'], {'x': X.numpy(), 'c0': c0.numpy(), 'h0': h0.numpy()}))
```
                      
                      
```python
# Set parameters for a small LSTM network
input_size  = 2  # size of one 'event', or sample, in our batch of data
hidden_dim  = 3  # 3 cells in the LSTM layer
output_size = 1  # desired model output
num_layers  = 3

torch_lstm = RNN(input_size,
                 hidden_dim,
                 output_size,
                 num_layers,
                 matching_in_out=True)

state = torch_lstm.state_dict()  # state captures the weights of the model

### NOT MY CODE
import numpy as np
from scipy.special import expit as sigmoid

def forget_gate(x, h, Weights_hf, Bias_hf, Weights_xf, Bias_xf, prev_cell_state):
    forget_hidden = np.dot(Weights_hf, h) + Bias_hf
    forget_eventx = np.dot(Weights_xf, x) + Bias_xf
    return np.multiply(sigmoid(forget_hidden + forget_eventx), prev_cell_state)

def input_gate(x, h, Weights_hi, Bias_hi, Weights_xi, Bias_xi,
               Weights_hl, Bias_hl, Weights_xl, Bias_xl):
    ignore_hidden = np.dot(Weights_hi, h) + Bias_hi
    ignore_eventx = np.dot(Weights_xi, x) + Bias_xi
    learn_hidden  = np.dot(Weights_hl, h) + Bias_hl
    learn_eventx  = np.dot(Weights_xl, x) + Bias_xl
    return np.multiply(sigmoid(ignore_eventx + ignore_hidden), np.tanh(learn_eventx + learn_hidden))

def cell_state(forget_gate_output, input_gate_output):
    return forget_gate_output + input_gate_output

def output_gate(x, h, Weights_ho, Bias_ho, Weights_xo, Bias_xo, cell_state):
    out_hidden = np.dot(Weights_ho, h) + Bias_ho
    out_eventx = np.dot(Weights_xo, x) + Bias_xo
    return np.multiply(sigmoid(out_eventx + out_hidden), np.tanh(cell_state))

def get_slices(hidden_dim):
    # PyTorch stacks the gate weights along dim 0, hidden_dim rows per gate;
    # return the [start, stop] pairs for the four gate blocks
    breaker = hidden_dim * 4
    return [[i, i + hidden_dim] for i in range(0, breaker, hidden_dim)]

class numpy_lstm:
    def __init__(self, layer_num=0, hidden_dim=1, matching_in_out=False):
        self.matching_in_out = matching_in_out
        self.layer_num = layer_num
        self.hidden_dim = hidden_dim

    def init_weights_from_pytorch(self, state):
        slices = get_slices(self.hidden_dim)

        # event (x) weights and biases for all gates
        lstm_weight_ih = 'lstm.weight_ih_l' + str(self.layer_num)
        self.Weights_xi = state[lstm_weight_ih][slices[0][0]:slices[0][1]].numpy()  # shape [h, x]
        self.Weights_xf = state[lstm_weight_ih][slices[1][0]:slices[1][1]].numpy()  # shape [h, x]
        self.Weights_xl = state[lstm_weight_ih][slices[2][0]:slices[2][1]].numpy()  # shape [h, x]
        self.Weights_xo = state[lstm_weight_ih][slices[3][0]:slices[3][1]].numpy()  # shape [h, x]

        lstm_bias_ih = 'lstm.bias_ih_l' + str(self.layer_num)
        self.Bias_xi = state[lstm_bias_ih][slices[0][0]:slices[0][1]].numpy()  # shape [h]
        self.Bias_xf = state[lstm_bias_ih][slices[1][0]:slices[1][1]].numpy()  # shape [h]
        self.Bias_xl = state[lstm_bias_ih][slices[2][0]:slices[2][1]].numpy()  # shape [h]
        self.Bias_xo = state[lstm_bias_ih][slices[3][0]:slices[3][1]].numpy()  # shape [h]

        # hidden state (h) weights and biases for all gates
        lstm_weight_hh = 'lstm.weight_hh_l' + str(self.layer_num)
        self.Weights_hi = state[lstm_weight_hh][slices[0][0]:slices[0][1]].numpy()  # shape [h, h]
        self.Weights_hf = state[lstm_weight_hh][slices[1][0]:slices[1][1]].numpy()  # shape [h, h]
        self.Weights_hl = state[lstm_weight_hh][slices[2][0]:slices[2][1]].numpy()  # shape [h, h]
        self.Weights_ho = state[lstm_weight_hh][slices[3][0]:slices[3][1]].numpy()  # shape [h, h]

        lstm_bias_hh = 'lstm.bias_hh_l' + str(self.layer_num)
        self.Bias_hi = state[lstm_bias_hh][slices[0][0]:slices[0][1]].numpy()  # shape [h]
        self.Bias_hf = state[lstm_bias_hh][slices[1][0]:slices[1][1]].numpy()  # shape [h]
        self.Bias_hl = state[lstm_bias_hh][slices[2][0]:slices[2][1]].numpy()  # shape [h]
        self.Bias_ho = state[lstm_bias_hh][slices[3][0]:slices[3][1]].numpy()  # shape [h]

    def forward_lstm_pass(self, input_data):
        h = np.zeros(self.hidden_dim)
        c = np.zeros(self.hidden_dim)

        output_list = []
        for eventx in input_data:
            f = forget_gate(eventx, h, self.Weights_hf, self.Bias_hf, self.Weights_xf, self.Bias_xf, c)
            i = input_gate(eventx, h, self.Weights_hi, self.Bias_hi, self.Weights_xi, self.Bias_xi,
                           self.Weights_hl, self.Bias_hl, self.Weights_xl, self.Bias_xl)
            c = cell_state(f, i)
            h = output_gate(eventx, h, self.Weights_ho, self.Bias_ho, self.Weights_xo, self.Bias_xo, c)
            if self.matching_in_out:  # kept as it was in the original code
                output_list.append(h)
        if self.matching_in_out:
            return output_list
        else:
            return h

class fully_connected_layer:
    def __init__(self, state, dict_name='fc'):
        self.fc_Weight = state[dict_name + '.weight'][0].numpy()
        self.fc_Bias = state[dict_name + '.bias'][0].numpy()  # shape [output_size]

    def forward(self, lstm_output, is_sigmoid=True):
        res = np.dot(self.fc_Weight, lstm_output) + self.fc_Bias
        if is_sigmoid:
            return sigmoid(res)
        else:
            return res

class RNN_model_Numpy:
    def __init__(self, state, input_size, hidden_dim, output_size, num_layers, matching_in_out=True):
        self.lstm_layers = []
        for i in range(num_layers):
            lstm_layer_obj = numpy_lstm(layer_num=i, hidden_dim=hidden_dim, matching_in_out=True)
            lstm_layer_obj.init_weights_from_pytorch(state)
            self.lstm_layers.append(lstm_layer_obj)

        self.hidden2out = fully_connected_layer(state, dict_name='hidden2out')

    def forward(self, feature_list):
        for layer in self.lstm_layers:
            feature_list = layer.forward_lstm_pass(feature_list)
        return self.hidden2out.forward(feature_list, is_sigmoid=False)

data = np.array([[1, 1],
                 [2, 2],
                 [3, 3]])

check = RNN_model_Numpy(state, input_size, hidden_dim, output_size, num_layers)
check.forward(data)
```
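As a sanity check on the gate equations above: with all-zero weights and biases, the forget gate reduces to sigmoid(0) · c = 0.5 · c, so it simply halves the previous cell state. The toy values below are hypothetical and use numpy only:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def forget_gate(x, h, W_hf, b_hf, W_xf, b_xf, prev_cell_state):
    forget_hidden = np.dot(W_hf, h) + b_hf
    forget_eventx = np.dot(W_xf, x) + b_xf
    return sigmoid(forget_hidden + forget_eventx) * prev_cell_state

# hypothetical toy setup: hidden_dim = 2, all weights and biases zero
h = np.zeros(2)
x = np.zeros(2)
W = np.zeros((2, 2))
b = np.zeros(2)
c_prev = np.array([1.0, -2.0])

out = forget_gate(x, h, W, b, W, b, c_prev)
# sigmoid(0) == 0.5, so the gate halves the previous cell state: [0.5, -1.0]
```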
                      
                      #Set Parameters for a small LSTM network
                      input_size  = 2 # size of one 'event', or sample, in our batch of data
                      hidden_dim  = 3 # 3 cells in the LSTM layer
                      output_size = 1 # desired model output
                      
                      num_layers=3
                      torch_lstm = RNN( input_size, 
                                       hidden_dim ,
                                       output_size,
                                       num_layers,
                                       matching_in_out=True
                                       )
                      
                      state = torch_lstm.state_dict() # state will capture the weights of your model
                      
                      ### NOT MY CODE
                      import numpy as np 
                      from scipy.special import expit as sigmoid
                      
                      def forget_gate(x, h, Weights_hf, Bias_hf, Weights_xf, Bias_xf, prev_cell_state):
                          forget_hidden  = np.dot(Weights_hf, h) + Bias_hf
                          forget_eventx  = np.dot(Weights_xf, x) + Bias_xf
                          return np.multiply( sigmoid(forget_hidden + forget_eventx), prev_cell_state )
                      
                      def input_gate(x, h, Weights_hi, Bias_hi, Weights_xi, Bias_xi, Weights_hl, Bias_hl, Weights_xl, Bias_xl):
                          ignore_hidden  = np.dot(Weights_hi, h) + Bias_hi
                          ignore_eventx  = np.dot(Weights_xi, x) + Bias_xi
                          learn_hidden   = np.dot(Weights_hl, h) + Bias_hl
                          learn_eventx   = np.dot(Weights_xl, x) + Bias_xl
                          return np.multiply( sigmoid(ignore_eventx + ignore_hidden), np.tanh(learn_eventx + learn_hidden) )
                      
                      
                      def cell_state(forget_gate_output, input_gate_output):
                          return forget_gate_output + input_gate_output
                      
                        
                      def output_gate(x, h, Weights_ho, Bias_ho, Weights_xo, Bias_xo, cell_state):
                          out_hidden = np.dot(Weights_ho, h) + Bias_ho
                          out_eventx = np.dot(Weights_xo, x) + Bias_xo
                          return np.multiply( sigmoid(out_eventx + out_hidden), np.tanh(cell_state) )
                      
                      
                       # sigmoid is already imported above as scipy.special.expit;
                       # redefining it here would silently shadow that import.
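As a quick numeric sanity check of the gate equations above: with all weights and biases zero, every pre-activation is 0, so the forget gate halves the previous cell state and the input gate contributes nothing. A toy single-step sketch (hand-picked values, not learned weights):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

hidden_dim = 2
c_prev = np.ones(hidden_dim)      # previous cell state
z = np.zeros(hidden_dim)          # every gate pre-activation is 0 here

f = sigmoid(z) * c_prev           # forget gate: 0.5 * c_prev
i = sigmoid(z) * np.tanh(z)       # input gate: 0.5 * 0 = 0
c = f + i                         # new cell state: [0.5, 0.5]
h = sigmoid(z) * np.tanh(c)       # output gate: 0.5 * tanh(0.5)
print(c, h)
```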
                      
                       def get_slices(hidden_dim):
                           # PyTorch stacks the input, forget, cell ("learn") and output gate
                           # parameters into one tensor of 4*hidden_dim rows; return each gate's
                           # [start, stop) row range. (The original hard-coded i+3, which is
                           # only correct when hidden_dim == 3.)
                           breaker = hidden_dim * 4
                           return [[i, i + hidden_dim] for i in range(0, breaker, breaker // 4)]
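A small check of the slicing logic, under the assumption that PyTorch packs the four gates contiguously (input, forget, cell, output) along the first axis:

```python
def get_slices(hidden_dim):
    # each gate owns hidden_dim consecutive rows of the packed tensor
    breaker = hidden_dim * 4
    return [[i, i + hidden_dim] for i in range(0, breaker, breaker // 4)]

print(get_slices(3))   # [[0, 3], [3, 6], [6, 9], [9, 12]]
print(get_slices(5))   # [[0, 5], [5, 10], [10, 15], [15, 20]]
```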
                      
                      class numpy_lstm:
                          def __init__( self, layer_num=0, hidden_dim=1, matching_in_out=False):
                              self.matching_in_out=matching_in_out
                              self.layer_num=layer_num
                              self.hidden_dim=hidden_dim
                              
                           def init_weights_from_pytorch(self, state):
                               slices = get_slices(self.hidden_dim)
                      
                              #Event (x) Weights and Biases for all gates
                              
                              lstm_weight_ih='lstm.weight_ih_l'+str(self.layer_num)
                              self.Weights_xi = state[lstm_weight_ih][slices[0][0]:slices[0][1]].numpy()  # shape  [h, x]
                              self.Weights_xf = state[lstm_weight_ih][slices[1][0]:slices[1][1]].numpy()  # shape  [h, x]
                              self.Weights_xl = state[lstm_weight_ih][slices[2][0]:slices[2][1]].numpy()  # shape  [h, x]
                              self.Weights_xo = state[lstm_weight_ih][slices[3][0]:slices[3][1]].numpy() # shape  [h, x]
                      
                              
                              lstm_bias_ih='lstm.bias_ih_l'+str(self.layer_num)
                              self.Bias_xi = state[lstm_bias_ih][slices[0][0]:slices[0][1]].numpy()  #shape is [h, 1]
                              self.Bias_xf = state[lstm_bias_ih][slices[1][0]:slices[1][1]].numpy()  #shape is [h, 1]
                              self.Bias_xl = state[lstm_bias_ih][slices[2][0]:slices[2][1]].numpy()  #shape is [h, 1]
                              self.Bias_xo = state[lstm_bias_ih][slices[3][0]:slices[3][1]].numpy() #shape is [h, 1]
                              
                              
                              lstm_weight_hh='lstm.weight_hh_l'+str(self.layer_num)
                      
                              #Hidden state (h) Weights and Biases for all gates
                              self.Weights_hi = state[lstm_weight_hh][slices[0][0]:slices[0][1]].numpy()  #shape is [h, h]
                              self.Weights_hf = state[lstm_weight_hh][slices[1][0]:slices[1][1]].numpy()  #shape is [h, h]
                              self.Weights_hl = state[lstm_weight_hh][slices[2][0]:slices[2][1]].numpy()  #shape is [h, h]
                              self.Weights_ho = state[lstm_weight_hh][slices[3][0]:slices[3][1]].numpy() #shape is [h, h]
                              
                              
                              lstm_bias_hh='lstm.bias_hh_l'+str(self.layer_num)
                      
                              self.Bias_hi = state[lstm_bias_hh][slices[0][0]:slices[0][1]].numpy()  #shape is [h, 1]
                              self.Bias_hf = state[lstm_bias_hh][slices[1][0]:slices[1][1]].numpy()  #shape is [h, 1]
                              self.Bias_hl = state[lstm_bias_hh][slices[2][0]:slices[2][1]].numpy()  #shape is [h, 1]
                              self.Bias_ho = state[lstm_bias_hh][slices[3][0]:slices[3][1]].numpy() #shape is [h, 1]

                           def forward_lstm_pass(self, input_data):
                              h = np.zeros(self.hidden_dim)
                              c = np.zeros(self.hidden_dim)
                              
                              output_list=[]
                              for eventx in input_data:
                                  f = forget_gate(eventx, h, self.Weights_hf, self.Bias_hf, self.Weights_xf, self.Bias_xf, c)
                                  i =  input_gate(eventx, h, self.Weights_hi, self.Bias_hi, self.Weights_xi, self.Bias_xi, 
                                              self.Weights_hl, self.Bias_hl, self.Weights_xl, self.Bias_xl)
                                  c = cell_state(f,i)
                                  h = output_gate(eventx, h, self.Weights_ho, self.Bias_ho, self.Weights_xo, self.Bias_xo, c)
                                   if self.matching_in_out: # collect every timestep's hidden state so the next layer sees a full sequence
                                       output_list.append(h)
                              if self.matching_in_out:
                                  return output_list
                              else:
                                  return h
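The slicing in init_weights_from_pytorch assumes PyTorch's packed parameter layout: per layer, weight_ih_l<k> has shape (4*hidden_dim, input_size), weight_hh_l<k> has shape (4*hidden_dim, hidden_dim), and each bias has shape (4*hidden_dim,). A mock illustration (no torch required; the zero arrays merely stand in for state_dict tensors):

```python
import numpy as np

input_size, hidden_dim = 2, 3

# stand-ins for state['lstm.weight_ih_l0'] etc., already converted to numpy
weight_ih_l0 = np.zeros((4 * hidden_dim, input_size))   # (12, 2)
weight_hh_l0 = np.zeros((4 * hidden_dim, hidden_dim))   # (12, 3)
bias_ih_l0 = np.zeros(4 * hidden_dim)                   # (12,)

# one gate's block is hidden_dim consecutive rows
gate_rows = weight_ih_l0[hidden_dim:2 * hidden_dim]
print(gate_rows.shape)   # (3, 2)
```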
                      
                      
                          
                          
                       class fully_connected_layer:
                           def __init__(self, state, dict_name='fc'):
                               # Indexing [0] flattens the weight to a vector; this relies on
                               # output_size == 1 (keep the full matrix for larger outputs).
                               self.fc_Weight = state[dict_name + '.weight'][0].numpy()  # shape [hidden_dim]
                               self.fc_Bias = state[dict_name + '.bias'][0].numpy()      # scalar
                               
                           def forward(self, lstm_output, is_sigmoid=True):
                               res = np.dot(self.fc_Weight, lstm_output) + self.fc_Bias
                               return sigmoid(res) if is_sigmoid else res
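With output_size == 1 the head reduces to a dot product plus a scalar bias; a hand-computed example with hypothetical weights:

```python
import numpy as np

fc_weight = np.array([1.0, 2.0, 3.0])   # pretend hidden_dim == 3
fc_bias = 0.5
h = np.array([1.0, 1.0, 1.0])           # one timestep's hidden state

res = np.dot(fc_weight, h) + fc_bias    # 1 + 2 + 3 + 0.5
print(res)   # 6.5
```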
                              
                      
                              
                       class RNN_model_Numpy:
                           def __init__(self, state, input_size, hidden_dim, output_size, num_layers, matching_in_out=True):
                               self.lstm_layers = []
                               for i in range(num_layers):
                                   lstm_layer_obj = numpy_lstm(layer_num=i, hidden_dim=hidden_dim,
                                                               matching_in_out=matching_in_out)
                                   lstm_layer_obj.init_weights_from_pytorch(state)
                                   self.lstm_layers.append(lstm_layer_obj)
                               
                               self.hidden2out = fully_connected_layer(state, dict_name='hidden2out')
                               
                           def forward(self, feature_list):
                               # Each layer consumes the full output sequence of the previous one.
                               for layer in self.lstm_layers:
                                   feature_list = layer.forward_lstm_pass(feature_list)
                               # Apply the fully connected head per timestep.
                               return [self.hidden2out.forward(h, is_sigmoid=False) for h in feature_list]
                      
                      data = np.array(
                                 [[1,1],
                                  [2,2],
                                  [3,3]])
                      
                      
                      
                      check=RNN_model_Numpy(state, input_size, hidden_dim, output_size, num_layers)
                      check.forward(data)
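To check the end-to-end shape flow (T input events in, T hidden states per LSTM layer, T scalar outputs from the head), here is a stripped-down zero-weight rollout; it mirrors the matching_in_out wiring above but is a self-contained sketch, not the trained model:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def zero_weight_lstm(seq, hidden_dim):
    # All weights zero: every pre-activation is 0, so the gates are constants.
    h, c = np.zeros(hidden_dim), np.zeros(hidden_dim)
    outs = []
    for _x in seq:
        f = sigmoid(np.zeros(hidden_dim)) * c
        i = sigmoid(np.zeros(hidden_dim)) * np.tanh(np.zeros(hidden_dim))
        c = f + i
        h = sigmoid(np.zeros(hidden_dim)) * np.tanh(c)
        outs.append(h)                       # one hidden state per timestep
    return outs

seq = [np.array([1.0, 1.0]) for _ in range(3)]          # T = 3 events, input_size 2
hidden = zero_weight_lstm(seq, hidden_dim=4)
outputs = [np.dot(np.zeros(4), h) + 0.0 for h in hidden]  # output_size == 1 head
print(len(hidden), hidden[0].shape, len(outputs))         # 3 (4,) 3
```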
                      
                      #Set Parameters for a small LSTM network
                      input_size  = 2 # size of one 'event', or sample, in our batch of data
                      hidden_dim  = 3 # 3 cells in the LSTM layer
                      output_size = 1 # desired model output
                      
                      num_layers=3
                      torch_lstm = RNN( input_size, 
                                       hidden_dim ,
                                       output_size,
                                       num_layers,
                                       matching_in_out=True
                                       )
                      
                      state = torch_lstm.state_dict() # state will capture the weights of your model
                      
                      ### NOT MY CODE
                      import numpy as np 
                      from scipy.special import expit as sigmoid
                      
                      def forget_gate(x, h, Weights_hf, Bias_hf, Weights_xf, Bias_xf, prev_cell_state):
                          forget_hidden  = np.dot(Weights_hf, h) + Bias_hf
                          forget_eventx  = np.dot(Weights_xf, x) + Bias_xf
                          return np.multiply( sigmoid(forget_hidden + forget_eventx), prev_cell_state )
                      
                      def input_gate(x, h, Weights_hi, Bias_hi, Weights_xi, Bias_xi, Weights_hl, Bias_hl, Weights_xl, Bias_xl):
                          ignore_hidden  = np.dot(Weights_hi, h) + Bias_hi
                          ignore_eventx  = np.dot(Weights_xi, x) + Bias_xi
                          learn_hidden   = np.dot(Weights_hl, h) + Bias_hl
                          learn_eventx   = np.dot(Weights_xl, x) + Bias_xl
                          return np.multiply( sigmoid(ignore_eventx + ignore_hidden), np.tanh(learn_eventx + learn_hidden) )
                      
                      
                      def cell_state(forget_gate_output, input_gate_output):
                          return forget_gate_output + input_gate_output
                      
                        
                      def output_gate(x, h, Weights_ho, Bias_ho, Weights_xo, Bias_xo, cell_state):
                          out_hidden = np.dot(Weights_ho, h) + Bias_ho
                          out_eventx = np.dot(Weights_xo, x) + Bias_xo
                          return np.multiply( sigmoid(out_eventx + out_hidden), np.tanh(cell_state) )
                      
                      
                      def sigmoid(x):
                          return 1/(1 + np.exp(-x))
                      
                      def get_slices(hidden_dim):
                          slices=[]
                          breaker=(hidden_dim*4)
                          slices=[[i,i+3] for i in range(0, breaker, breaker//4)]
                          return slices
                      
                      class numpy_lstm:
                          def __init__( self, layer_num=0, hidden_dim=1, matching_in_out=False):
                              self.matching_in_out=matching_in_out
                              self.layer_num=layer_num
                              self.hidden_dim=hidden_dim
                              
                          def init_weights_from_pytorch(self, state):
                              slices=get_slices(self.hidden_dim)
                              print (slices)
                      
                              #Event (x) Weights and Biases for all gates
                              
                              lstm_weight_ih='lstm.weight_ih_l'+str(self.layer_num)
                              self.Weights_xi = state[lstm_weight_ih][slices[0][0]:slices[0][1]].numpy()  # shape  [h, x]
                              self.Weights_xf = state[lstm_weight_ih][slices[1][0]:slices[1][1]].numpy()  # shape  [h, x]
                              self.Weights_xl = state[lstm_weight_ih][slices[2][0]:slices[2][1]].numpy()  # shape  [h, x]
                              self.Weights_xo = state[lstm_weight_ih][slices[3][0]:slices[3][1]].numpy() # shape  [h, x]
                      
                              
                              lstm_bias_ih='lstm.bias_ih_l'+str(self.layer_num)
                              self.Bias_xi = state[lstm_bias_ih][slices[0][0]:slices[0][1]].numpy()  #shape is [h, 1]
                              self.Bias_xf = state[lstm_bias_ih][slices[1][0]:slices[1][1]].numpy()  #shape is [h, 1]
                              self.Bias_xl = state[lstm_bias_ih][slices[2][0]:slices[2][1]].numpy()  #shape is [h, 1]
                              self.Bias_xo = state[lstm_bias_ih][slices[3][0]:slices[3][1]].numpy() #shape is [h, 1]
                              
                              
                              lstm_weight_hh='lstm.weight_hh_l'+str(self.layer_num)
                      
                              #Hidden state (h) Weights and Biases for all gates
                              self.Weights_hi = state[lstm_weight_hh][slices[0][0]:slices[0][1]].numpy()  #shape is [h, h]
                              self.Weights_hf = state[lstm_weight_hh][slices[1][0]:slices[1][1]].numpy()  #shape is [h, h]
                              self.Weights_hl = state[lstm_weight_hh][slices[2][0]:slices[2][1]].numpy()  #shape is [h, h]
                              self.Weights_ho = state[lstm_weight_hh][slices[3][0]:slices[3][1]].numpy() #shape is [h, h]
                              
                              
                              lstm_bias_hh='lstm.bias_hh_l'+str(self.layer_num)
                      
                              self.Bias_hi = state[lstm_bias_hh][slices[0][0]:slices[0][1]].numpy()  #shape is [h, 1]
                              self.Bias_hf = state[lstm_bias_hh][slices[1][0]:slices[1][1]].numpy()  #shape is [h, 1]
                              self.Bias_hl = state[lstm_bias_hh][slices[2][0]:slices[2][1]].numpy()  #shape is [h, 1]
                              self.Bias_ho = state[lstm_bias_hh][slices[3][0]:slices[3][1]].numpy() #shape is [h, 1]
                          def forward_lstm_pass(self,input_data):
                              h = np.zeros(self.hidden_dim)
                              c = np.zeros(self.hidden_dim)
                              
                              output_list=[]
                              for eventx in input_data:
                                  f = forget_gate(eventx, h, self.Weights_hf, self.Bias_hf, self.Weights_xf, self.Bias_xf, c)
                                  i =  input_gate(eventx, h, self.Weights_hi, self.Bias_hi, self.Weights_xi, self.Bias_xi, 
                                              self.Weights_hl, self.Bias_hl, self.Weights_xl, self.Bias_xl)
                                  c = cell_state(f,i)
                                  h = output_gate(eventx, h, self.Weights_ho, self.Bias_ho, self.Weights_xo, self.Bias_xo, c)
                                  if self.matching_in_out: # doesnt make sense but it was as it was in main code :(
                                      output_list.append(h)
                              if self.matching_in_out:
                                  return output_list
                              else:
                                  return h
                      
                      
                          
                          
                      class fully_connected_layer:
                          def __init__(self,state, dict_name='fc', ):
                              self.fc_Weight = state[dict_name+'.weight'][0].numpy()
                              self.fc_Bias = state[dict_name+'.bias'][0].numpy() #shape is [,output_size]
                              
                          def forward(self,lstm_output, is_sigmoid=True):
                              res=np.dot(self.fc_Weight, lstm_output)+self.fc_Bias
                              print (res)
                              if is_sigmoid:
                                  return sigmoid(res)
                              else:
                                  return res
                              
                      
                              
                      class RNN_model_Numpy:
                          def __init__(self, state, input_size, hidden_dim, output_size, num_layers, matching_in_out=True):
                              self.lstm_layers=[]
                              for i in range(0, num_layers):
                                  lstm_layer_obj=numpy_lstm(layer_num=i, hidden_dim=hidden_dim, matching_in_out=True)
                                  lstm_layer_obj.init_weights_from_pytorch(state) 
                                  self.lstm_layers.append(lstm_layer_obj)
                              
                              self.hidden2out=fully_connected_layer(state, dict_name='hidden2out')
                              
                          def forward(self, feature_list):
                              for x in self.lstm_layers:
                                  lstm_output=x.forward_lstm_pass(feature_list)
                                  feature_list=lstm_output
                                  
                              return self.hidden2out.forward(feature_list, is_sigmoid=False)
                      
                      data = np.array(
                                 [[1,1],
                                  [2,2],
                                  [3,3]])
                      
                      
                      
                      check=RNN_model_Numpy(state, input_size, hidden_dim, output_size, num_layers)
                      check.forward(data)
                      
                      #Set Parameters for a small LSTM network
                      input_size  = 2 # size of one 'event', or sample, in our batch of data
                      hidden_dim  = 3 # 3 cells in the LSTM layer
                      output_size = 1 # desired model output
                      
                      num_layers=3
                      torch_lstm = RNN( input_size, 
                                       hidden_dim ,
                                       output_size,
                                       num_layers,
                                       matching_in_out=True
                                       )
                      
                      state = torch_lstm.state_dict() # state will capture the weights of your model
                      
                      ### NOT MY CODE
                      import numpy as np 
                      from scipy.special import expit as sigmoid
                      
                      def forget_gate(x, h, Weights_hf, Bias_hf, Weights_xf, Bias_xf, prev_cell_state):
                          forget_hidden  = np.dot(Weights_hf, h) + Bias_hf
                          forget_eventx  = np.dot(Weights_xf, x) + Bias_xf
                          return np.multiply( sigmoid(forget_hidden + forget_eventx), prev_cell_state )
                      
                      def input_gate(x, h, Weights_hi, Bias_hi, Weights_xi, Bias_xi, Weights_hl, Bias_hl, Weights_xl, Bias_xl):
                          ignore_hidden  = np.dot(Weights_hi, h) + Bias_hi
                          ignore_eventx  = np.dot(Weights_xi, x) + Bias_xi
                          learn_hidden   = np.dot(Weights_hl, h) + Bias_hl
                          learn_eventx   = np.dot(Weights_xl, x) + Bias_xl
                          return np.multiply( sigmoid(ignore_eventx + ignore_hidden), np.tanh(learn_eventx + learn_hidden) )
                      
                      
                      def cell_state(forget_gate_output, input_gate_output):
                          return forget_gate_output + input_gate_output
                      
                        
                      def output_gate(x, h, Weights_ho, Bias_ho, Weights_xo, Bias_xo, cell_state):
                          out_hidden = np.dot(Weights_ho, h) + Bias_ho
                          out_eventx = np.dot(Weights_xo, x) + Bias_xo
                          return np.multiply( sigmoid(out_eventx + out_hidden), np.tanh(cell_state) )
                      
                      
                      def sigmoid(x):
                          return 1/(1 + np.exp(-x))
                      
                      def get_slices(hidden_dim):
                          slices=[]
                          breaker=(hidden_dim*4)
                          slices=[[i,i+3] for i in range(0, breaker, breaker//4)]
                          return slices
                      
                      class numpy_lstm:
                          def __init__( self, layer_num=0, hidden_dim=1, matching_in_out=False):
                              self.matching_in_out=matching_in_out
                              self.layer_num=layer_num
                              self.hidden_dim=hidden_dim
                              
                          def init_weights_from_pytorch(self, state):
                              slices=get_slices(self.hidden_dim)
                              print (slices)
                      
                              #Event (x) Weights and Biases for all gates
                              
                              lstm_weight_ih='lstm.weight_ih_l'+str(self.layer_num)
                              self.Weights_xi = state[lstm_weight_ih][slices[0][0]:slices[0][1]].numpy()  # shape  [h, x]
                              self.Weights_xf = state[lstm_weight_ih][slices[1][0]:slices[1][1]].numpy()  # shape  [h, x]
                              self.Weights_xl = state[lstm_weight_ih][slices[2][0]:slices[2][1]].numpy()  # shape  [h, x]
                              self.Weights_xo = state[lstm_weight_ih][slices[3][0]:slices[3][1]].numpy() # shape  [h, x]
                      
                              
                              lstm_bias_ih='lstm.bias_ih_l'+str(self.layer_num)
                              self.Bias_xi = state[lstm_bias_ih][slices[0][0]:slices[0][1]].numpy()  #shape is [h, 1]
                              self.Bias_xf = state[lstm_bias_ih][slices[1][0]:slices[1][1]].numpy()  #shape is [h, 1]
                              self.Bias_xl = state[lstm_bias_ih][slices[2][0]:slices[2][1]].numpy()  #shape is [h, 1]
                              self.Bias_xo = state[lstm_bias_ih][slices[3][0]:slices[3][1]].numpy() #shape is [h, 1]
                              
                              
                              lstm_weight_hh='lstm.weight_hh_l'+str(self.layer_num)
                      
                              #Hidden state (h) Weights and Biases for all gates
                              self.Weights_hi = state[lstm_weight_hh][slices[0][0]:slices[0][1]].numpy()  #shape is [h, h]
                              self.Weights_hf = state[lstm_weight_hh][slices[1][0]:slices[1][1]].numpy()  #shape is [h, h]
                              self.Weights_hl = state[lstm_weight_hh][slices[2][0]:slices[2][1]].numpy()  #shape is [h, h]
                              self.Weights_ho = state[lstm_weight_hh][slices[3][0]:slices[3][1]].numpy() #shape is [h, h]
                              
                              
                              lstm_bias_hh='lstm.bias_hh_l'+str(self.layer_num)
                      
                              self.Bias_hi = state[lstm_bias_hh][slices[0][0]:slices[0][1]].numpy()  #shape is [h, 1]
                              self.Bias_hf = state[lstm_bias_hh][slices[1][0]:slices[1][1]].numpy()  #shape is [h, 1]
                              self.Bias_hl = state[lstm_bias_hh][slices[2][0]:slices[2][1]].numpy()  #shape is [h, 1]
                              self.Bias_ho = state[lstm_bias_hh][slices[3][0]:slices[3][1]].numpy() #shape is [h, 1]
                          def forward_lstm_pass(self,input_data):
                              h = np.zeros(self.hidden_dim)
                              c = np.zeros(self.hidden_dim)
                              
                              output_list=[]
                              for eventx in input_data:
                                  f = forget_gate(eventx, h, self.Weights_hf, self.Bias_hf, self.Weights_xf, self.Bias_xf, c)
                                  i =  input_gate(eventx, h, self.Weights_hi, self.Bias_hi, self.Weights_xi, self.Bias_xi, 
                                              self.Weights_hl, self.Bias_hl, self.Weights_xl, self.Bias_xl)
                                  c = cell_state(f,i)
                                  h = output_gate(eventx, h, self.Weights_ho, self.Bias_ho, self.Weights_xo, self.Bias_xo, c)
                                  if self.matching_in_out: # doesnt make sense but it was as it was in main code :(
                                      output_list.append(h)
                              if self.matching_in_out:
                                  return output_list
                              else:
                                  return h
                      
                      
                          
                          
                      class fully_connected_layer:
                          def __init__(self,state, dict_name='fc', ):
                              self.fc_Weight = state[dict_name+'.weight'][0].numpy()
                              self.fc_Bias = state[dict_name+'.bias'][0].numpy() #shape is [,output_size]
                              
                          def forward(self,lstm_output, is_sigmoid=True):
                              res=np.dot(self.fc_Weight, lstm_output)+self.fc_Bias
                              print (res)
                              if is_sigmoid:
                                  return sigmoid(res)
                              else:
                                  return res
                              
                      
                              
                      class RNN_model_Numpy:
                          def __init__(self, state, input_size, hidden_dim, output_size, num_layers, matching_in_out=True):
                              self.lstm_layers=[]
                              for i in range(0, num_layers):
                                  lstm_layer_obj=numpy_lstm(layer_num=i, hidden_dim=hidden_dim, matching_in_out=True)
                                  lstm_layer_obj.init_weights_from_pytorch(state) 
                                  self.lstm_layers.append(lstm_layer_obj)
                              
                              self.hidden2out=fully_connected_layer(state, dict_name='hidden2out')
                              
                          def forward(self, feature_list):
                              for x in self.lstm_layers:
                                  lstm_output=x.forward_lstm_pass(feature_list)
                                  feature_list=lstm_output
                                  
                              return self.hidden2out.forward(feature_list, is_sigmoid=False)
                      
                      data = np.array(
                                 [[1,1],
                                  [2,2],
                                  [3,3]])
                      
                      
                      
                      check=RNN_model_Numpy(state, input_size, hidden_dim, output_size, num_layers)
                      check.forward(data)
                      
                      #Set Parameters for a small LSTM network
                      input_size  = 2 # size of one 'event', or sample, in our batch of data
                      hidden_dim  = 3 # 3 cells in the LSTM layer
                      output_size = 1 # desired model output
                      
                      num_layers=3
                      torch_lstm = RNN( input_size, 
                                       hidden_dim ,
                                       output_size,
                                       num_layers,
                                       matching_in_out=True
                                       )
                      
                      state = torch_lstm.state_dict() # state will capture the weights of your model
                      
                      ### NOT MY CODE
                      import numpy as np 
                      from scipy.special import expit as sigmoid
                      
                      def forget_gate(x, h, Weights_hf, Bias_hf, Weights_xf, Bias_xf, prev_cell_state):
                          forget_hidden  = np.dot(Weights_hf, h) + Bias_hf
                          forget_eventx  = np.dot(Weights_xf, x) + Bias_xf
                          return np.multiply( sigmoid(forget_hidden + forget_eventx), prev_cell_state )
                      
                      def input_gate(x, h, Weights_hi, Bias_hi, Weights_xi, Bias_xi, Weights_hl, Bias_hl, Weights_xl, Bias_xl):
                          ignore_hidden  = np.dot(Weights_hi, h) + Bias_hi
                          ignore_eventx  = np.dot(Weights_xi, x) + Bias_xi
                          learn_hidden   = np.dot(Weights_hl, h) + Bias_hl
                          learn_eventx   = np.dot(Weights_xl, x) + Bias_xl
                          return np.multiply( sigmoid(ignore_eventx + ignore_hidden), np.tanh(learn_eventx + learn_hidden) )
                      
                      
                      def cell_state(forget_gate_output, input_gate_output):
                          return forget_gate_output + input_gate_output
                      
                        
                      def output_gate(x, h, Weights_ho, Bias_ho, Weights_xo, Bias_xo, cell_state):
                          out_hidden = np.dot(Weights_ho, h) + Bias_ho
                          out_eventx = np.dot(Weights_xo, x) + Bias_xo
                          return np.multiply( sigmoid(out_eventx + out_hidden), np.tanh(cell_state) )
                      
                      
                      def sigmoid(x):
                          return 1/(1 + np.exp(-x))
                      
                      def get_slices(hidden_dim):
                          slices=[]
                          breaker=(hidden_dim*4)
                          slices=[[i,i+3] for i in range(0, breaker, breaker//4)]
                          return slices
                      
class numpy_lstm:
    def __init__(self, layer_num=0, hidden_dim=1, matching_in_out=False):
        self.matching_in_out = matching_in_out
        self.layer_num = layer_num
        self.hidden_dim = hidden_dim

    def init_weights_from_pytorch(self, state):
        # Row ranges of the four stacked gate blocks (input, forget, cell, output)
        slices = get_slices(self.hidden_dim)

        # Event (x) weights and biases for all gates
        lstm_weight_ih = 'lstm.weight_ih_l' + str(self.layer_num)
        self.Weights_xi = state[lstm_weight_ih][slices[0][0]:slices[0][1]].numpy()  # shape [h, x]
        self.Weights_xf = state[lstm_weight_ih][slices[1][0]:slices[1][1]].numpy()  # shape [h, x]
        self.Weights_xl = state[lstm_weight_ih][slices[2][0]:slices[2][1]].numpy()  # shape [h, x]
        self.Weights_xo = state[lstm_weight_ih][slices[3][0]:slices[3][1]].numpy()  # shape [h, x]

        lstm_bias_ih = 'lstm.bias_ih_l' + str(self.layer_num)
        self.Bias_xi = state[lstm_bias_ih][slices[0][0]:slices[0][1]].numpy()  # shape [h]
        self.Bias_xf = state[lstm_bias_ih][slices[1][0]:slices[1][1]].numpy()  # shape [h]
        self.Bias_xl = state[lstm_bias_ih][slices[2][0]:slices[2][1]].numpy()  # shape [h]
        self.Bias_xo = state[lstm_bias_ih][slices[3][0]:slices[3][1]].numpy()  # shape [h]

        # Hidden state (h) weights and biases for all gates
        lstm_weight_hh = 'lstm.weight_hh_l' + str(self.layer_num)
        self.Weights_hi = state[lstm_weight_hh][slices[0][0]:slices[0][1]].numpy()  # shape [h, h]
        self.Weights_hf = state[lstm_weight_hh][slices[1][0]:slices[1][1]].numpy()  # shape [h, h]
        self.Weights_hl = state[lstm_weight_hh][slices[2][0]:slices[2][1]].numpy()  # shape [h, h]
        self.Weights_ho = state[lstm_weight_hh][slices[3][0]:slices[3][1]].numpy()  # shape [h, h]

        lstm_bias_hh = 'lstm.bias_hh_l' + str(self.layer_num)
        self.Bias_hi = state[lstm_bias_hh][slices[0][0]:slices[0][1]].numpy()  # shape [h]
        self.Bias_hf = state[lstm_bias_hh][slices[1][0]:slices[1][1]].numpy()  # shape [h]
        self.Bias_hl = state[lstm_bias_hh][slices[2][0]:slices[2][1]].numpy()  # shape [h]
        self.Bias_ho = state[lstm_bias_hh][slices[3][0]:slices[3][1]].numpy()  # shape [h]

    def forward_lstm_pass(self, input_data):
        h = np.zeros(self.hidden_dim)
        c = np.zeros(self.hidden_dim)

        output_list = []
        for eventx in input_data:
            f = forget_gate(eventx, h, self.Weights_hf, self.Bias_hf, self.Weights_xf, self.Bias_xf, c)
            i = input_gate(eventx, h, self.Weights_hi, self.Bias_hi, self.Weights_xi, self.Bias_xi,
                           self.Weights_hl, self.Bias_hl, self.Weights_xl, self.Bias_xl)
            c = cell_state(f, i)
            h = output_gate(eventx, h, self.Weights_ho, self.Bias_ho, self.Weights_xo, self.Bias_xo, c)
            output_list.append(h)
        if self.matching_in_out:
            return output_list  # one hidden state per input timestep
        return h  # hidden state after the final timestep only
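The row-slicing in init_weights_from_pytorch is equivalent to np.split along axis 0, since PyTorch stores weight_ih_l{k} with shape (4*hidden_dim, input_size). A standalone sketch, using a deterministic array in place of the real state-dict tensor:

```python
import numpy as np

hidden_dim, input_size = 3, 2
# Stand-in for state['lstm.weight_ih_l0'], shape (4*hidden_dim, input_size)
weight_ih = np.arange(4 * hidden_dim * input_size, dtype=float).reshape(4 * hidden_dim, input_size)

# Four equal blocks along dim 0, in PyTorch's gate order: input, forget, cell, output
W_xi, W_xf, W_xl, W_xo = np.split(weight_ih, 4, axis=0)

print(W_xi.shape)  # (3, 2)
# The second block is the same rows the slice-based code would take for the forget gate
assert np.array_equal(W_xf, weight_ih[hidden_dim:2 * hidden_dim])
```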
                      
                      
                          
                          
class fully_connected_layer:
    def __init__(self, state, dict_name='fc'):
        # Indexing with [0] assumes output_size == 1: the weight collapses
        # to shape [h] and the bias to a scalar
        self.fc_Weight = state[dict_name + '.weight'][0].numpy()
        self.fc_Bias = state[dict_name + '.bias'][0].numpy()

    def forward(self, lstm_output, is_sigmoid=True):
        res = np.dot(self.fc_Weight, lstm_output) + self.fc_Bias
        if is_sigmoid:
            return sigmoid(res)
        return res
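The fully connected head is just an affine map, optionally followed by a sigmoid; with output_size == 1 the weight collapses to a single row of shape [h], so the dot product yields a scalar. A tiny standalone check with made-up numbers:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

fc_weight = np.array([0.5, -0.25, 1.0])  # hypothetical [h] weight row, output_size == 1
fc_bias = 0.1                            # scalar bias
h = np.array([1.0, 2.0, 3.0])            # final hidden state from the LSTM stack

logit = np.dot(fc_weight, h) + fc_bias   # 0.5 - 0.5 + 3.0 + 0.1 = 3.1
prob = sigmoid(logit)                    # squash to (0, 1) for a binary output
print(round(logit, 3), round(prob, 3))
```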
                              
                      
                              
class RNN_model_Numpy:
    def __init__(self, state, input_size, hidden_dim, output_size, num_layers, matching_in_out=True):
        self.lstm_layers = []
        for i in range(num_layers):
            # Pass matching_in_out through (the original hardcoded True here)
            lstm_layer_obj = numpy_lstm(layer_num=i, hidden_dim=hidden_dim, matching_in_out=matching_in_out)
            lstm_layer_obj.init_weights_from_pytorch(state)
            self.lstm_layers.append(lstm_layer_obj)

        self.hidden2out = fully_connected_layer(state, dict_name='hidden2out')

    def forward(self, feature_list):
        # Feed each layer's per-timestep hidden states into the next layer
        for lstm_layer in self.lstm_layers:
            feature_list = lstm_layer.forward_lstm_pass(feature_list)

        return self.hidden2out.forward(feature_list, is_sigmoid=False)
                      
# Set parameters for a small LSTM network.
# (The original listed these after they were used; the model, its parameters,
# and the state dict must exist before RNN_model_Numpy can be built.)
input_size  = 2  # size of one 'event', or sample, in our batch of data
hidden_dim  = 3  # 3 cells in the LSTM layer
output_size = 1  # desired model output
num_layers  = 3

torch_lstm = RNN(input_size,
                 hidden_dim,
                 output_size,
                 num_layers,
                 matching_in_out=True)

state = torch_lstm.state_dict()  # state captures the weights of your model

data = np.array([[1, 1],
                 [2, 2],
                 [3, 3]])

check = RNN_model_Numpy(state, input_size, hidden_dim, output_size, num_layers)
check.forward(data)
                      
                      ### NOT MY CODE
                      import numpy as np 
                      from scipy.special import expit as sigmoid
                      
                      def forget_gate(x, h, Weights_hf, Bias_hf, Weights_xf, Bias_xf, prev_cell_state):
                          forget_hidden  = np.dot(Weights_hf, h) + Bias_hf
                          forget_eventx  = np.dot(Weights_xf, x) + Bias_xf
                          return np.multiply( sigmoid(forget_hidden + forget_eventx), prev_cell_state )
                      
                      def input_gate(x, h, Weights_hi, Bias_hi, Weights_xi, Bias_xi, Weights_hl, Bias_hl, Weights_xl, Bias_xl):
                          ignore_hidden  = np.dot(Weights_hi, h) + Bias_hi
                          ignore_eventx  = np.dot(Weights_xi, x) + Bias_xi
                          learn_hidden   = np.dot(Weights_hl, h) + Bias_hl
                          learn_eventx   = np.dot(Weights_xl, x) + Bias_xl
                          return np.multiply( sigmoid(ignore_eventx + ignore_hidden), np.tanh(learn_eventx + learn_hidden) )
                      
                      
                      def cell_state(forget_gate_output, input_gate_output):
                          return forget_gate_output + input_gate_output
                      
                        
                      def output_gate(x, h, Weights_ho, Bias_ho, Weights_xo, Bias_xo, cell_state):
                          out_hidden = np.dot(Weights_ho, h) + Bias_ho
                          out_eventx = np.dot(Weights_xo, x) + Bias_xo
                          return np.multiply( sigmoid(out_eventx + out_hidden), np.tanh(cell_state) )
                      
                      
                      def sigmoid(x):
                          return 1/(1 + np.exp(-x))
                      
                      def get_slices(hidden_dim):
                          slices=[]
                          breaker=(hidden_dim*4)
                          slices=[[i,i+3] for i in range(0, breaker, breaker//4)]
                          return slices
                      
                      class numpy_lstm:
                          def __init__( self, layer_num=0, hidden_dim=1, matching_in_out=False):
                              self.matching_in_out=matching_in_out
                              self.layer_num=layer_num
                              self.hidden_dim=hidden_dim
                              
                          def init_weights_from_pytorch(self, state):
                              slices=get_slices(self.hidden_dim)
                              print (slices)
                      
                              #Event (x) Weights and Biases for all gates
                              
                              lstm_weight_ih='lstm.weight_ih_l'+str(self.layer_num)
                              self.Weights_xi = state[lstm_weight_ih][slices[0][0]:slices[0][1]].numpy()  # shape  [h, x]
                              self.Weights_xf = state[lstm_weight_ih][slices[1][0]:slices[1][1]].numpy()  # shape  [h, x]
                              self.Weights_xl = state[lstm_weight_ih][slices[2][0]:slices[2][1]].numpy()  # shape  [h, x]
                              self.Weights_xo = state[lstm_weight_ih][slices[3][0]:slices[3][1]].numpy() # shape  [h, x]
                      
                              
                              lstm_bias_ih='lstm.bias_ih_l'+str(self.layer_num)
                              self.Bias_xi = state[lstm_bias_ih][slices[0][0]:slices[0][1]].numpy()  #shape is [h, 1]
                              self.Bias_xf = state[lstm_bias_ih][slices[1][0]:slices[1][1]].numpy()  #shape is [h, 1]
                              self.Bias_xl = state[lstm_bias_ih][slices[2][0]:slices[2][1]].numpy()  #shape is [h, 1]
                              self.Bias_xo = state[lstm_bias_ih][slices[3][0]:slices[3][1]].numpy() #shape is [h, 1]
                              
                              
                              lstm_weight_hh='lstm.weight_hh_l'+str(self.layer_num)
                      
                              #Hidden state (h) Weights and Biases for all gates
                              self.Weights_hi = state[lstm_weight_hh][slices[0][0]:slices[0][1]].numpy()  #shape is [h, h]
                              self.Weights_hf = state[lstm_weight_hh][slices[1][0]:slices[1][1]].numpy()  #shape is [h, h]
                              self.Weights_hl = state[lstm_weight_hh][slices[2][0]:slices[2][1]].numpy()  #shape is [h, h]
                              self.Weights_ho = state[lstm_weight_hh][slices[3][0]:slices[3][1]].numpy() #shape is [h, h]
                              
                              
                              lstm_bias_hh='lstm.bias_hh_l'+str(self.layer_num)
                      
                              self.Bias_hi = state[lstm_bias_hh][slices[0][0]:slices[0][1]].numpy()  #shape is [h, 1]
                              self.Bias_hf = state[lstm_bias_hh][slices[1][0]:slices[1][1]].numpy()  #shape is [h, 1]
                              self.Bias_hl = state[lstm_bias_hh][slices[2][0]:slices[2][1]].numpy()  #shape is [h, 1]
                              self.Bias_ho = state[lstm_bias_hh][slices[3][0]:slices[3][1]].numpy() #shape is [h, 1]
                          def forward_lstm_pass(self,input_data):
                              h = np.zeros(self.hidden_dim)
                              c = np.zeros(self.hidden_dim)
                              
                              output_list=[]
                              for eventx in input_data:
                                  f = forget_gate(eventx, h, self.Weights_hf, self.Bias_hf, self.Weights_xf, self.Bias_xf, c)
                                  i =  input_gate(eventx, h, self.Weights_hi, self.Bias_hi, self.Weights_xi, self.Bias_xi, 
                                              self.Weights_hl, self.Bias_hl, self.Weights_xl, self.Bias_xl)
                                  c = cell_state(f,i)
                                  h = output_gate(eventx, h, self.Weights_ho, self.Bias_ho, self.Weights_xo, self.Bias_xo, c)
                                  if self.matching_in_out: # doesnt make sense but it was as it was in main code :(
                                      output_list.append(h)
                              if self.matching_in_out:
                                  return output_list
                              else:
                                  return h
                      
                      
                          
                          
                      class fully_connected_layer:
                          def __init__(self,state, dict_name='fc', ):
                              self.fc_Weight = state[dict_name+'.weight'][0].numpy()
                              self.fc_Bias = state[dict_name+'.bias'][0].numpy() #shape is [,output_size]
                              
                          def forward(self,lstm_output, is_sigmoid=True):
                              res=np.dot(self.fc_Weight, lstm_output)+self.fc_Bias
                              print (res)
                              if is_sigmoid:
                                  return sigmoid(res)
                              else:
                                  return res
                              
                      
                              
                      class RNN_model_Numpy:
                          def __init__(self, state, input_size, hidden_dim, output_size, num_layers, matching_in_out=True):
                              self.lstm_layers=[]
                              for i in range(0, num_layers):
                                  lstm_layer_obj=numpy_lstm(layer_num=i, hidden_dim=hidden_dim, matching_in_out=True)
                                  lstm_layer_obj.init_weights_from_pytorch(state) 
                                  self.lstm_layers.append(lstm_layer_obj)
                              
                              self.hidden2out=fully_connected_layer(state, dict_name='hidden2out')
                              
                          def forward(self, feature_list):
                              for x in self.lstm_layers:
                                  lstm_output=x.forward_lstm_pass(feature_list)
                                  feature_list=lstm_output
                                  
                              return self.hidden2out.forward(feature_list, is_sigmoid=False)
                      
                      data = np.array(
                                 [[1,1],
                                  [2,2],
                                  [3,3]])
                      
                      
                      
                      check=RNN_model_Numpy(state, input_size, hidden_dim, output_size, num_layers)
                      check.forward(data)
                      
                      #Set Parameters for a small LSTM network
                      input_size  = 2 # size of one 'event', or sample, in our batch of data
                      hidden_dim  = 3 # 3 cells in the LSTM layer
                      output_size = 1 # desired model output
                      
                      num_layers=3
                      torch_lstm = RNN( input_size, 
                                       hidden_dim ,
                                       output_size,
                                       num_layers,
                                       matching_in_out=True
                                       )
                      
                      state = torch_lstm.state_dict() # state will capture the weights of your model
                      
                      ### NOT MY CODE
                      import numpy as np 
                      from scipy.special import expit as sigmoid
                      
                      def forget_gate(x, h, Weights_hf, Bias_hf, Weights_xf, Bias_xf, prev_cell_state):
                          forget_hidden  = np.dot(Weights_hf, h) + Bias_hf
                          forget_eventx  = np.dot(Weights_xf, x) + Bias_xf
                          return np.multiply( sigmoid(forget_hidden + forget_eventx), prev_cell_state )
                      
                      def input_gate(x, h, Weights_hi, Bias_hi, Weights_xi, Bias_xi, Weights_hl, Bias_hl, Weights_xl, Bias_xl):
                          ignore_hidden  = np.dot(Weights_hi, h) + Bias_hi
                          ignore_eventx  = np.dot(Weights_xi, x) + Bias_xi
                          learn_hidden   = np.dot(Weights_hl, h) + Bias_hl
                          learn_eventx   = np.dot(Weights_xl, x) + Bias_xl
                          return np.multiply( sigmoid(ignore_eventx + ignore_hidden), np.tanh(learn_eventx + learn_hidden) )
                      
                      
                      def cell_state(forget_gate_output, input_gate_output):
                          return forget_gate_output + input_gate_output
                      
                        
                      def output_gate(x, h, Weights_ho, Bias_ho, Weights_xo, Bias_xo, cell_state):
                          out_hidden = np.dot(Weights_ho, h) + Bias_ho
                          out_eventx = np.dot(Weights_xo, x) + Bias_xo
                          return np.multiply( sigmoid(out_eventx + out_hidden), np.tanh(cell_state) )
                      
                      
                      def sigmoid(x):
                          return 1/(1 + np.exp(-x))
                      
                      def get_slices(hidden_dim):
                          slices=[]
                          breaker=(hidden_dim*4)
                          slices=[[i,i+3] for i in range(0, breaker, breaker//4)]
                          return slices
                      
                      class numpy_lstm:
                          def __init__( self, layer_num=0, hidden_dim=1, matching_in_out=False):
                              self.matching_in_out=matching_in_out
                              self.layer_num=layer_num
                              self.hidden_dim=hidden_dim
                              
                          def init_weights_from_pytorch(self, state):
                              slices=get_slices(self.hidden_dim)
                              print (slices)
                      
                              #Event (x) Weights and Biases for all gates
                              
                              lstm_weight_ih='lstm.weight_ih_l'+str(self.layer_num)
                              self.Weights_xi = state[lstm_weight_ih][slices[0][0]:slices[0][1]].numpy()  # shape  [h, x]
                              self.Weights_xf = state[lstm_weight_ih][slices[1][0]:slices[1][1]].numpy()  # shape  [h, x]
                              self.Weights_xl = state[lstm_weight_ih][slices[2][0]:slices[2][1]].numpy()  # shape  [h, x]
                              self.Weights_xo = state[lstm_weight_ih][slices[3][0]:slices[3][1]].numpy() # shape  [h, x]
                      
                              
                              lstm_bias_ih='lstm.bias_ih_l'+str(self.layer_num)
                              self.Bias_xi = state[lstm_bias_ih][slices[0][0]:slices[0][1]].numpy()  #shape is [h, 1]
                              self.Bias_xf = state[lstm_bias_ih][slices[1][0]:slices[1][1]].numpy()  #shape is [h, 1]
                              self.Bias_xl = state[lstm_bias_ih][slices[2][0]:slices[2][1]].numpy()  #shape is [h, 1]
                              self.Bias_xo = state[lstm_bias_ih][slices[3][0]:slices[3][1]].numpy() #shape is [h, 1]
                              
                              
                              lstm_weight_hh='lstm.weight_hh_l'+str(self.layer_num)
                      
                              #Hidden state (h) Weights and Biases for all gates
                              self.Weights_hi = state[lstm_weight_hh][slices[0][0]:slices[0][1]].numpy()  #shape is [h, h]
                              self.Weights_hf = state[lstm_weight_hh][slices[1][0]:slices[1][1]].numpy()  #shape is [h, h]
                              self.Weights_hl = state[lstm_weight_hh][slices[2][0]:slices[2][1]].numpy()  #shape is [h, h]
                              self.Weights_ho = state[lstm_weight_hh][slices[3][0]:slices[3][1]].numpy() #shape is [h, h]
                              
                              
                              lstm_bias_hh='lstm.bias_hh_l'+str(self.layer_num)
                      
                              self.Bias_hi = state[lstm_bias_hh][slices[0][0]:slices[0][1]].numpy()  #shape is [h, 1]
                              self.Bias_hf = state[lstm_bias_hh][slices[1][0]:slices[1][1]].numpy()  #shape is [h, 1]
                              self.Bias_hl = state[lstm_bias_hh][slices[2][0]:slices[2][1]].numpy()  #shape is [h, 1]
                              self.Bias_ho = state[lstm_bias_hh][slices[3][0]:slices[3][1]].numpy() #shape is [h, 1]
                          def forward_lstm_pass(self,input_data):
                              h = np.zeros(self.hidden_dim)
                              c = np.zeros(self.hidden_dim)
                              
                              output_list=[]
                              for eventx in input_data:
                                  f = forget_gate(eventx, h, self.Weights_hf, self.Bias_hf, self.Weights_xf, self.Bias_xf, c)
                                  i =  input_gate(eventx, h, self.Weights_hi, self.Bias_hi, self.Weights_xi, self.Bias_xi, 
                                              self.Weights_hl, self.Bias_hl, self.Weights_xl, self.Bias_xl)
                                  c = cell_state(f,i)
                                  h = output_gate(eventx, h, self.Weights_ho, self.Bias_ho, self.Weights_xo, self.Bias_xo, c)
                                  if self.matching_in_out: # doesnt make sense but it was as it was in main code :(
                                      output_list.append(h)
                              if self.matching_in_out:
                                  return output_list
                              else:
                                  return h
                      
                      
                          
                          
                      class fully_connected_layer:
                          def __init__(self,state, dict_name='fc', ):
                              self.fc_Weight = state[dict_name+'.weight'][0].numpy()
                              self.fc_Bias = state[dict_name+'.bias'][0].numpy() #shape is [,output_size]
                              
                          def forward(self,lstm_output, is_sigmoid=True):
                              res=np.dot(self.fc_Weight, lstm_output)+self.fc_Bias
                              print (res)
                              if is_sigmoid:
                                  return sigmoid(res)
                              else:
                                  return res
                              
                      
                              
                      class RNN_model_Numpy:
                          def __init__(self, state, input_size, hidden_dim, output_size, num_layers, matching_in_out=True):
                              self.lstm_layers=[]
                              for i in range(0, num_layers):
                                  lstm_layer_obj=numpy_lstm(layer_num=i, hidden_dim=hidden_dim, matching_in_out=True)
                                  lstm_layer_obj.init_weights_from_pytorch(state) 
                                  self.lstm_layers.append(lstm_layer_obj)
                              
                              self.hidden2out=fully_connected_layer(state, dict_name='hidden2out')
                              
                          def forward(self, feature_list):
                              for x in self.lstm_layers:
                                  lstm_output=x.forward_lstm_pass(feature_list)
                                  feature_list=lstm_output
                                  
                              return self.hidden2out.forward(feature_list, is_sigmoid=False)
                      
                      data = np.array(
                                 [[1,1],
                                  [2,2],
                                  [3,3]])
                      
                      
                      
                      check=RNN_model_Numpy(state, input_size, hidden_dim, output_size, num_layers)
                      check.forward(data)
                      
                      #Set Parameters for a small LSTM network
                      input_size  = 2 # size of one 'event', or sample, in our batch of data
                      hidden_dim  = 3 # 3 cells in the LSTM layer
                      output_size = 1 # desired model output
                      
                      num_layers=3
                      torch_lstm = RNN( input_size, 
                                       hidden_dim ,
                                       output_size,
                                       num_layers,
                                       matching_in_out=True
                                       )
                      
                      state = torch_lstm.state_dict() # state will capture the weights of your model
                      
                      ### NOT MY CODE
                      import numpy as np 
                      from scipy.special import expit as sigmoid
                      
                      def forget_gate(x, h, Weights_hf, Bias_hf, Weights_xf, Bias_xf, prev_cell_state):
                          forget_hidden  = np.dot(Weights_hf, h) + Bias_hf
                          forget_eventx  = np.dot(Weights_xf, x) + Bias_xf
                          return np.multiply( sigmoid(forget_hidden + forget_eventx), prev_cell_state )
                      
                      def input_gate(x, h, Weights_hi, Bias_hi, Weights_xi, Bias_xi, Weights_hl, Bias_hl, Weights_xl, Bias_xl):
                          ignore_hidden  = np.dot(Weights_hi, h) + Bias_hi
                          ignore_eventx  = np.dot(Weights_xi, x) + Bias_xi
                          learn_hidden   = np.dot(Weights_hl, h) + Bias_hl
                          learn_eventx   = np.dot(Weights_xl, x) + Bias_xl
                          return np.multiply( sigmoid(ignore_eventx + ignore_hidden), np.tanh(learn_eventx + learn_hidden) )
                      
                      
                      def cell_state(forget_gate_output, input_gate_output):
                          return forget_gate_output + input_gate_output
                      
                        
                      def output_gate(x, h, Weights_ho, Bias_ho, Weights_xo, Bias_xo, cell_state):
                          out_hidden = np.dot(Weights_ho, h) + Bias_ho
                          out_eventx = np.dot(Weights_xo, x) + Bias_xo
                          return np.multiply( sigmoid(out_eventx + out_hidden), np.tanh(cell_state) )
                      
                      
                      def sigmoid(x):
                          return 1/(1 + np.exp(-x))
                      
                      def get_slices(hidden_dim):
                          slices=[]
                          breaker=(hidden_dim*4)
                          slices=[[i,i+3] for i in range(0, breaker, breaker//4)]
                          return slices
                      
                      class numpy_lstm:
                          def __init__( self, layer_num=0, hidden_dim=1, matching_in_out=False):
                              self.matching_in_out=matching_in_out
                              self.layer_num=layer_num
                              self.hidden_dim=hidden_dim
                              
                          def init_weights_from_pytorch(self, state):
                              slices=get_slices(self.hidden_dim)
                              print (slices)
                      
                              #Event (x) Weights and Biases for all gates
                              
                              lstm_weight_ih='lstm.weight_ih_l'+str(self.layer_num)
                              self.Weights_xi = state[lstm_weight_ih][slices[0][0]:slices[0][1]].numpy()  # shape  [h, x]
                              self.Weights_xf = state[lstm_weight_ih][slices[1][0]:slices[1][1]].numpy()  # shape  [h, x]
                              self.Weights_xl = state[lstm_weight_ih][slices[2][0]:slices[2][1]].numpy()  # shape [h, x]
                              self.Weights_xo = state[lstm_weight_ih][slices[3][0]:slices[3][1]].numpy()  # shape [h, x]

                              lstm_bias_ih = 'lstm.bias_ih_l' + str(self.layer_num)
                              self.Bias_xi = state[lstm_bias_ih][slices[0][0]:slices[0][1]].numpy()  # shape [h]
                              self.Bias_xf = state[lstm_bias_ih][slices[1][0]:slices[1][1]].numpy()  # shape [h]
                              self.Bias_xl = state[lstm_bias_ih][slices[2][0]:slices[2][1]].numpy()  # shape [h]
                              self.Bias_xo = state[lstm_bias_ih][slices[3][0]:slices[3][1]].numpy()  # shape [h]

                              lstm_weight_hh = 'lstm.weight_hh_l' + str(self.layer_num)

                              # Hidden state (h) weights and biases for all gates
                              self.Weights_hi = state[lstm_weight_hh][slices[0][0]:slices[0][1]].numpy()  # shape [h, h]
                              self.Weights_hf = state[lstm_weight_hh][slices[1][0]:slices[1][1]].numpy()  # shape [h, h]
                              self.Weights_hl = state[lstm_weight_hh][slices[2][0]:slices[2][1]].numpy()  # shape [h, h]
                              self.Weights_ho = state[lstm_weight_hh][slices[3][0]:slices[3][1]].numpy()  # shape [h, h]

                              lstm_bias_hh = 'lstm.bias_hh_l' + str(self.layer_num)

                              self.Bias_hi = state[lstm_bias_hh][slices[0][0]:slices[0][1]].numpy()  # shape [h]
                              self.Bias_hf = state[lstm_bias_hh][slices[1][0]:slices[1][1]].numpy()  # shape [h]
                              self.Bias_hl = state[lstm_bias_hh][slices[2][0]:slices[2][1]].numpy()  # shape [h]
                              self.Bias_ho = state[lstm_bias_hh][slices[3][0]:slices[3][1]].numpy()  # shape [h]
                          def forward_lstm_pass(self, input_data):
                              h = np.zeros(self.hidden_dim)
                              c = np.zeros(self.hidden_dim)

                              output_list = []
                              for eventx in input_data:
                                  f = forget_gate(eventx, h, self.Weights_hf, self.Bias_hf, self.Weights_xf, self.Bias_xf, c)
                                  i = input_gate(eventx, h, self.Weights_hi, self.Bias_hi, self.Weights_xi, self.Bias_xi,
                                                 self.Weights_hl, self.Bias_hl, self.Weights_xl, self.Bias_xl)
                                  c = cell_state(f, i)
                                  h = output_gate(eventx, h, self.Weights_ho, self.Bias_ho, self.Weights_xo, self.Bias_xo, c)
                                  if self.matching_in_out:  # collect the hidden state at every timestep
                                      output_list.append(h)
                              if self.matching_in_out:
                                  return output_list
                              else:
                                  return h
                      
                      
                          
                          
                      class fully_connected_layer:
                          def __init__(self, state, dict_name='fc'):
                              self.fc_Weight = state[dict_name + '.weight'][0].numpy()
                              self.fc_Bias = state[dict_name + '.bias'][0].numpy()  # shape [output_size]

                          def forward(self, lstm_output, is_sigmoid=True):
                              res = np.dot(self.fc_Weight, lstm_output) + self.fc_Bias
                              print(res)
                              if is_sigmoid:
                                  return sigmoid(res)
                              else:
                                  return res
                              
                      
                              
                      class RNN_model_Numpy:
                          def __init__(self, state, input_size, hidden_dim, output_size, num_layers, matching_in_out=True):
                              self.lstm_layers = []
                              for i in range(num_layers):
                                  # pass matching_in_out through instead of hardcoding True
                                  lstm_layer_obj = numpy_lstm(layer_num=i, hidden_dim=hidden_dim, matching_in_out=matching_in_out)
                                  lstm_layer_obj.init_weights_from_pytorch(state)
                                  self.lstm_layers.append(lstm_layer_obj)

                              self.hidden2out = fully_connected_layer(state, dict_name='hidden2out')
                              
                          def forward(self, feature_list):
                              for x in self.lstm_layers:
                                  lstm_output=x.forward_lstm_pass(feature_list)
                                  feature_list=lstm_output
                                  
                              return self.hidden2out.forward(feature_list, is_sigmoid=False)
                      
                      data = np.array(
                                 [[1,1],
                                  [2,2],
                                  [3,3]])
                      
                      
                      
                      check = RNN_model_Numpy(state, input_size, hidden_dim, output_size, num_layers)
                      check.forward(data)
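The single-step gate arithmetic that `forward_lstm_pass` delegates to (`forget_gate`, `input_gate`, `cell_state`, and `output_gate` are defined earlier in the answer) can be sanity-checked in isolation. The sketch below inlines those gates with random placeholder weights; the dimensions and weight names are illustrative, not taken from a real checkpoint:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

h_dim, x_dim = 4, 2
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((h_dim, x_dim)) for k in ('xi', 'xf', 'xl', 'xo')}  # input weights
U = {k: rng.standard_normal((h_dim, h_dim)) for k in ('hi', 'hf', 'hl', 'ho')}  # hidden weights
b = {k: rng.standard_normal(h_dim)
     for k in ('xi', 'xf', 'xl', 'xo', 'hi', 'hf', 'hl', 'ho')}                 # biases

x = np.array([1.0, 1.0])   # one input event
h = np.zeros(h_dim)        # hidden state
c = np.zeros(h_dim)        # cell state

# forget gate output scales the old cell state
f = sigmoid(W['xf'] @ x + b['xf'] + U['hf'] @ h + b['hf']) * c
# input gate times the tanh candidate (the "l" gate above)
i = sigmoid(W['xi'] @ x + b['xi'] + U['hi'] @ h + b['hi']) \
    * np.tanh(W['xl'] @ x + b['xl'] + U['hl'] @ h + b['hl'])
c = f + i                                                                # new cell state
h = sigmoid(W['xo'] @ x + b['xo'] + U['ho'] @ h + b['ho']) * np.tanh(c)  # new hidden state
print(h.shape)  # (4,)
```

Since `h` is a sigmoid times a tanh, every component stays in [-1, 1], which is a quick sanity check on the wiring.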
                      

                      Unpickle instance from Jupyter Notebook in Flask App

                      ├── WebApp/
                      │  └── app.py
                      └── Untitled.ipynb
                      
                      from WebApp.app import GensimWord2VecVectorizer
                      GensimWord2VecVectorizer.__module__ = 'app'
                      
                      import sys
                      sys.modules['app'] = sys.modules['WebApp.app']
                      
                      GensimWord2VecVectorizer.__module__ = 'app'
                      
                      import sys
                      app = sys.modules['app'] = type(sys)('app')
                      app.GensimWord2VecVectorizer = GensimWord2VecVectorizer
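The module-aliasing trick can be exercised end-to-end in a single process. In the sketch below the class body is a hypothetical stand-in (the real `GensimWord2VecVectorizer` lives in `WebApp/app.py`); what matters is that pickles referencing `app.GensimWord2VecVectorizer` resolve through the alias module:

```python
import pickle
import sys
import types

# Hypothetical stand-in for the class defined in WebApp/app.py.
class GensimWord2VecVectorizer:
    def __init__(self, size=100):
        self.size = size

# Register an alias module named 'app' so pickles that reference
# 'app.GensimWord2VecVectorizer' resolve to this class.
app = sys.modules['app'] = types.ModuleType('app')
app.GensimWord2VecVectorizer = GensimWord2VecVectorizer
GensimWord2VecVectorizer.__module__ = 'app'

vec = GensimWord2VecVectorizer(size=50)
restored = pickle.loads(pickle.dumps(vec))
print(type(restored).__module__, restored.size)  # app 50
```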
                      

                      AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks'>

                      import pickle

                      import numpy as np
                      import pandas as pd

                      df = pd.DataFrame(np.random.rand(3, 6))
                      
                      with open("dump_from_v1.3.4.pickle", "wb") as f: 
                          pickle.dump(df, f) 
                      
                      quit()
                      
                      import pickle
                      
                      with open("dump_from_v1.3.4.pickle", "rb") as f: 
                          df = pickle.load(f) 
                      
                      
                      ---------------------------------------------------------------------------
                      AttributeError                            Traceback (most recent call last)
                      <ipython-input-2-ff5c218eca92> in <module>
                            1 with open("dump_from_v1.3.4.pickle", "rb") as f:
                      ----> 2     df = pickle.load(f)
                            3 
                      
                      AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks' from '/opt/anaconda3/lib/python3.7/site-packages/pandas/core/internals/blocks.py'>
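This error only appears when the environment *reading* the pickle runs an older pandas than the one that wrote it: pickle serializes pandas internals, so it is not a cross-version interchange format. Upgrading pandas in the reading environment (or re-exporting through a stable format such as CSV or Parquet) resolves it. A same-version round trip, as below, always succeeds, which is an easy way to confirm a version mismatch is the cause:

```python
import pickle

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(3, 6))

# Same-version round trip succeeds; the AttributeError above comes from
# unpickling in an environment with an older pandas than the writer's.
restored = pickle.loads(pickle.dumps(df))
print(restored.equals(df))  # True
```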
                      
                      

                      What is the proper way to make an object with unpickable fields pickable?

                      def is_picklable(obj):
                          try:
                              pickle.dumps(obj)
                          except pickle.PicklingError:
                              return False
                          return True
                      
                      import pickle
                      import warnings
                      from typing import Any

                      def is_picklable(obj: Any) -> bool:
                          try:
                              pickle.dumps(obj)
                              return True
                          except (pickle.PicklingError, pickle.PickleError, AttributeError, ImportError):
                              # https://docs.python.org/3/library/pickle.html#what-can-be-pickled-and-unpickled
                              return False
                          except RecursionError:
                              warnings.warn(
                                  f"Could not determine if object of type {type(obj)!r} is picklable "
                                  "due to a RecursionError that was suppressed. "
                                  "Setting a higher recursion limit MAY allow this object to be pickled."
                              )
                              return False
                          except Exception as e:
                              # https://docs.python.org/3/library/pickle.html#id9
                              warnings.warn(
                                  f"An error occurred while attempting to pickle an "
                                  f"object of type {type(obj)!r}. Assuming it's unpicklable. The exception was {e}."
                              )
                              return False
                      
                      class Unpicklable:
                          """
                          A simple marker class so we can distinguish when a deserialized object
                          is a string because it was originally unpicklable 
                          (and not simply a string to begin with)
                          """
                          def __init__(self, obj_str: str):
                              self.obj_str = obj_str
                      
                          def __str__(self):
                              return self.obj_str
                      
                          def __repr__(self):
                              return f'Unpicklable(obj_str={self.obj_str!r})'
                      
                      
                      class PicklableNamespace(Namespace):
                          def __getstate__(self):
                              """For serialization"""
                      
                              # always make a copy so you don't accidentally modify state
                              state = self.__dict__.copy()
                      
                              # Any unpicklables will be converted to an ``Unpicklable`` object
                              # with its str format stored in the object
                              for key, val in state.items():
                                  if not is_picklable(val):
                                      state[key] = Unpicklable(str(val))
                              return state
                          def __setstate__(self, state):
                              self.__dict__.update(state)  # or leave unimplemented
                      
                      # Normally file handles are not picklable
                      p = PicklableNamespace(f=open('test.txt'))
                      
                      data = pickle.dumps(p)
                      del p
                      
                      loaded_p = pickle.loads(data)
                      # PicklableNamespace(f=Unpicklable(obj_str="<_io.TextIOWrapper name='test.txt' mode='r' encoding='cp1252'>"))
                      
                      
                      def is_picklable(obj: Any) -> bool:
                          """
                          Checks if something is picklable.

                          Ref:
                              - https://stackoverflow.com/questions/70128335/what-is-the-proper-way-to-make-an-object-with-unpickable-fields-pickable
                              - pycharm halting all the time issue: https://stackoverflow.com/questions/70761481/how-to-stop-pycharms-break-stop-halt-feature-on-handled-exceptions-i-e-only-b
                          """
                          import pickle
                          try:
                              pickle.dumps(obj)
                          except Exception:
                              return False
                          return True
                      

                      Anyway to pass string containing compiled code instead of file path to ctypes.CDLL?

                      from ctypes import *
                      
                      # int add(int x, int y)
                      # {
                      #   return (x+y);
                      # }
                      code = b'\x55\x48\x89\xe5\x89\x7d\xfc\x89\x75\xf8\x8b\x55\xfc\x8b\x45' \
                             b'\xf8\x01\xd0\x5d\xc3'
                      
                      copy = create_string_buffer(code)
                      address = addressof(copy)
                      aligned = address & ~0xfff
                      size = 0x2000
                      prototype = CFUNCTYPE(c_int, c_int, c_int)
                      add = prototype(address)
                      pythonapi.mprotect(c_void_p(aligned), size, 7)
                      print(add(20, 30))
                      

                      python pandas how to read csv file by block

                      data = pd.read_csv('data.csv', header=None)
                      dfs = []
                      for _, df in data.groupby(data[0].eq('No.').cumsum()):
                          df = pd.DataFrame(df.iloc[1:].values, columns=df.iloc[0])
                          dfs.append(df.rename_axis(columns=None))
                      
                      # First block
                      >>> dfs[0]
                          No.              time 00:00:00 00:00:01 00:00:02 00:00:03 00:00:04 00:00:05 00:00:06 00:00:07 00:00:08 00:00:09 00:00:0A  ...
                      0     1  2021/09/12 02:16      235      610      345      997      446      130      129       94      555      274        4  NaN
                      1     2  2021/09/12 02:17      364      210      371      341      294       87      179      106      425      262        3  NaN
                      2  1434  2021/09/12 02:28      269      135      372      262      307       73       86       93      512      283        4  NaN
                      3  1435  2021/09/12 02:29      281      207      688      322      233       75       69       85      663      276        2  NaN
                      
                      
                      # Second block
                      >>> dfs[1]
                          No.              time 00:00:10 00:00:11 00:00:12 00:00:13 00:00:14 00:00:15 00:00:16 00:00:17 00:00:18 00:00:19 00:00:1A  ...
                      0     1  2021/09/12 02:16      255      619      200      100      453      456        4       19       56       23        4  NaN
                      1     2  2021/09/12 02:17      368       21       37       31       24        8       19     1006     4205     2062       30  NaN
                      2  1434  2021/09/12 02:28     2689     1835     3782     2682      307      743      256      741       52       23        6  NaN
                      3  1435  2021/09/12 02:29     2281     2047     6848     3522     2353      755      659      885     6863       26       36  NaN
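The header-detection trick generalizes: every row whose first column equals 'No.' starts a new block, and `cumsum()` over that boolean turns block membership into group ids for `groupby`. A self-contained run on a tiny hypothetical two-block CSV (column names `a`/`b`/`c`/`d` are made up for the demo):

```python
import io

import pandas as pd

csv_text = """No.,time,a,b
1,2021/09/12 02:16,235,610
2,2021/09/12 02:17,364,210
No.,time,c,d
1,2021/09/12 02:16,255,619
2,2021/09/12 02:17,368,21
"""

data = pd.read_csv(io.StringIO(csv_text), header=None)
dfs = []
for _, df in data.groupby(data[0].eq('No.').cumsum()):
    # the first row of each group is that block's header
    df = pd.DataFrame(df.iloc[1:].values, columns=df.iloc[0])
    dfs.append(df.rename_axis(columns=None))

print(len(dfs))              # 2
print(list(dfs[1].columns))  # ['No.', 'time', 'c', 'd']
```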
                      
                      
                      def run(sock, delay, zipobj):
                         zf = zipfile.ZipFile(zipobj)
                         for f in zf.namelist():
                            print("using zip :", zf.filename)
                            name = f  # avoid shadowing the built-in str
                            myobject = re.search(r'(^[a-zA-Z]{4})_.*', name)
                            Objects = myobject.group(1)
                            if Objects == 'LDEV':
                               metric = re.search('.*LDEV_(.*)/.*', name).group(1)
                            elif Objects == 'Port':
                               metric = re.search('.*/(Port_.*).csv', name).group(1)
                            else:
                               metric = None  # avoid using an unbound name below
                               print("None")
                            print("using csv : ", f)
                            #df = pd.read_csv(zf.open(f), skiprows=[0,1,2,3,4,5])
                            data = pd.read_csv(zf.open(f), header=None, skiprows=[0,1,2,3,4,5])
                            dfs = []
                            for _, df in data.groupby(data[0].eq('No.').cumsum()):
                               df = pd.DataFrame(df.iloc[1:].values, columns=df.iloc[0])
                               dfs.append(df.rename_axis(columns=None))
                               print("here")
                               date_pattern='%Y/%m/%d %H:%M'
                               df['epoch'] = df.apply(lambda row: int(time.mktime(time.strptime(row.time,date_pattern))), axis=1) # create epoch as a column
                               tuples=[] # data will be saved in a list
                               #formated_str='perf.type.serial.object.00.00.00.TOTAL_IOPS'
                               for each_column in list(df.columns)[2:-1]:
                                      for e in zip(list(df['epoch']),list(df[each_column])):
                                          each_column=each_column.replace("X", '')
                                          tuples.append((f"perf.type.serial.{Objects}.{each_column}.{metric}",e))
                            package = pickle.dumps(tuples, 1)
                            size = struct.pack('!L', len(package))
                            sock.sendall(size)
                            sock.sendall(package)
                            time.sleep(delay)
                      

                      binary deserialization to another type with FSPickler

                      let converter =
                          {
                              new ITypeNameConverter with
                                  member this.OfSerializedType(typeInfo) =
                                      { typeInfo with Name = "dummy" }   // always use dummy type name
                                  member this.ToDeserializedType(typeInfo) =
                                      typeInfo
                          }
                      
                      type T = 
                          {
                              a: int
                              b: string
                          }
                      
                      let tArray =
                          [|
                              { a = 1; b = "one" }
                              { a = 2; b = "two" }
                              { a = 3; b = "three" }
                          |]
                      
                      let serializer = FsPickler.CreateBinarySerializer(typeConverter = converter)
                      let bytes = serializer.Pickle(tArray)
                      
                      type U =
                          {
                              a: int
                              b: string
                          }
                      
                      let uArray = serializer.UnPickle<U[]>(bytes)
                      
                      
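The FsPickler snippet above deserializes data into a structurally identical but differently named record type. Python's pickle can achieve a similar rename-on-load by overriding `Unpickler.find_class`; the sketch below is a minimal illustration, with the `Old` and `New` class names invented for the example.

```python
import io
import pickle

class Old:
    """Original class whose instances were pickled."""
    def __init__(self, a, b):
        self.a = a
        self.b = b

class New:
    """Structurally identical replacement class."""
    def __init__(self, a, b):
        self.a = a
        self.b = b

class RenamingUnpickler(pickle.Unpickler):
    # Redirect lookups of the old class name to the new class
    # while the stream is being deserialized.
    def find_class(self, module, name):
        if name == "Old":
            return New
        return super().find_class(module, name)

payload = pickle.dumps(Old(1, "one"))
obj = RenamingUnpickler(io.BytesIO(payload)).load()
print(type(obj).__name__, obj.a, obj.b)  # → New 1 one
```

Because pickle restores instance state via `__dict__` rather than calling `__init__`, the two classes only need compatible attributes, not identical constructors.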

                      Jupyter shell commands in a function

                      def foo(astr):
                          !ls $astr
                      
                      foo('*.py')
                      
                      !ls *.py
                      
                      from IPython import get_ipython
                      ipython = get_ipython()
                      
                      code = ipython.transform_cell('!ls')
                      print(code)
                      
                      exec(code)
                      
                      exec(ipython.transform_cell('!ls'))
                      
                      
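Outside IPython, the `!` syntax is unavailable. As a rough plain-Python stand-in (a sketch only, not what IPython does internally), the shell call can be made explicit with subprocess:

```python
import subprocess

def foo(pattern):
    # Rough plain-Python equivalent of IPython's `!ls $astr`:
    # run through a shell so the glob pattern expands.
    result = subprocess.run(f"ls {pattern}", shell=True,
                            capture_output=True, text=True)
    return result.stdout.splitlines()

print(foo("*"))
```

Unlike the IPython version, this returns the listing as a Python list instead of printing it directly, which is usually more useful inside a function.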

                      Community Discussions

                      Trending Discussions on pickle
                      • Pickle and Numpy versions
                      • TorchText Vocab TypeError: Vocab.__init__() got an unexpected keyword argument 'min_freq'
                      • Using RNN Trained Model without pytorch installed
                      • Unpickle instance from Jupyter Notebook in Flask App
                      • AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks'>
                      • What is the proper way to make an object with unpickable fields pickable?
                      • Anyway to pass string containing compiled code instead of file path to ctypes.CDLL?
                      • python pandas how to read csv file by block
                      • binary deserialization to another type with FSPickler
                      • Jupyter shell commands in a function

                      QUESTION

                      Pickle and Numpy versions

                      Asked 2022-Apr-08 at 08:24

                      I have some old sklearn models that I can't retrain. They were pickled a long time ago with versions that are now unclear. I can open them with Python 3.6 and Numpy 1.14, but when I try to move to Python 3.8 with Numpy 1.18, I get a segfault on loading them.

                      I tried dumping them with protocol 4 from Python 3.6, but it didn't help.

                      Saving:

                      with open('model.pkl', 'wb') as fid:
                          pickle.dump(model, fid, protocol=4)
                      

                      Loading:

                      with open('model.pkl', 'rb') as fid:
                          model = pickle.load(fid)
                      

                      Is there anything I can do in such a situation?

                      ANSWER

                      Answered 2022-Apr-08 at 08:24

                      What worked for me (very task-specific but maybe will help someone):

                      Old dependencies:

                      import pickle
                      import joblib

                      with open('model.pkl', 'rb') as fid:
                          model = pickle.load(fid, encoding="latin1")
                      joblib.dump(model.tree_.get_arrays()[0], "training_data.pkl")
                      

                      Newer dependencies:

                      import joblib
                      from sklearn.neighbors import KernelDensity
                      
                      data = joblib.load("training_data.pkl")
                      kde = KernelDensity(
                            algorithm="auto",
                            atol=0,
                            bandwidth=0.5,
                            breadth_first=True,
                            kernel="gaussian",
                            leaf_size=40,
                            metric="euclidean",
                            metric_params=None,
                            rtol=0
                      ).fit(data)
                      
                      with open("new_model.pkl", "wb") as f:
                          pickle.dump(kde, f)
                      

                      Source https://stackoverflow.com/questions/71780028
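On the "unclear versions" point in the question: the protocol a pickle stream was written with can be read off the stream itself using the standard pickletools module. This is a small diagnostic sketch; it identifies only the pickle protocol, not the Numpy or sklearn versions used at dump time.

```python
import pickle
import pickletools

payload = pickle.dumps({"a": 1}, protocol=4)

# For protocol >= 2, the stream begins with a PROTO opcode
# whose argument is the protocol number.
opcode, arg, pos = next(pickletools.genops(payload))
print(opcode.name, arg)  # → PROTO 4
```

The same `pickletools.genops` call works on bytes read from an existing `.pkl` file, so it can be used to inspect old model files before attempting to load them.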

                      Community Discussions, Code Snippets contain sources that include Stack Exchange Network

                      Vulnerabilities

                      No vulnerabilities reported

                      Install pickle

                      You can download it from GitHub.
                      PHP requires the Visual C runtime (CRT). The Microsoft Visual C++ Redistributable for Visual Studio 2019 is suitable for all these PHP versions, see visualstudio.microsoft.com. You MUST download the x86 CRT for PHP x86 builds and the x64 CRT for PHP x64 builds. The CRT installer supports the /quiet and /norestart command-line switches, so you can also script it.

                      Support

                      For any new features, suggestions, or bugs, create an issue on GitHub. If you have questions, check and ask on the community page at Stack Overflow.

                      • © 2022 Open Weaver Inc.