lal | 🔥 Golang audio/video live streaming library | Video Utils library
Trending Discussions on lal
QUESTION
I have a df with data about matches between teams, and I want to add a new column with the head-to-head (h2h) record between the two teams prior to each match.
For example:
df = pd.DataFrame(data = [['LAC','LAL', 1, '15/02/2022'], ['LAC','LAL', 1, '16/02/2022'], ['LAL','LAC', 1, '17/02/2022'],
['LAL','LAC', 1, '18/02/2022'], ['LAL','LAC', 1, '19/02/2022'], ['LAC','LAL', 1, '20/02/2022'],
['LAL','LAC', 1, '21/02/2022'], ['LAC','LAL', 1, '22/02/2022']],
columns = ['winner', 'loser', 'won', 'date'])
In this example the head to head prior to each match should be: 0-0, 1-0, 2-0, 1-2, 2-2, 3-3, 3-4
I want to calculate the h2h win percentage, but I guess getting the number of wins of one team vs the other is the first step. I can calculate the final h2h with a groupby, but I'm not sure how to approach calculating it per match, given that a team might be in either of the two columns. To note, this df follows a winner/loser format, so 'won' is always 1. Alternatively, I can change the df to a long version (one match = two rows), but I'm not sure if that helps. I have other columns as well (more stats, IDs, etc.), but I'm not sure they are relevant for this question.
Based on @quasi-human reply, I can do the following:
df['winner_wins'] = df.groupby(['winner', 'loser'])['won'].cumsum()
df['winner_wins'] = df.groupby(['winner', 'loser'])['winner_wins'].shift(1)
to get an accurate record of the number of wins of the 'winner' team prior to a match. But I don't know how I should approach getting the same for the 'loser' team.
ANSWER
Answered 2022-Feb-26 at 01:43
If I understand your question correctly, the cumsum and expanding methods might be useful for you.
import pandas as pd
# Create a sample dataframe
df = pd.DataFrame(data = [['LAC','LAL', 1, '15/02/2022'], ['LAC','LAL', 1, '16/02/2022'], ['LAL','LAC', 1, '17/02/2022'], ['LAL','LAC', 1, '18/02/2022'], ['LAL','LAC', 1, '19/02/2022'], ['LAC','LAL', 1, '20/02/2022'], ['LAL','LAC', 1, '21/02/2022'], ['LAC','LAL', 1, '22/02/2022']], columns = ['winner', 'loser', 'won', 'date'])
# Calculate h2h records
df = df.sort_values('date').assign(
LAC_h2h_wins=(df.winner=='LAC').cumsum(),
LAL_h2h_wins=(df.winner=='LAL').cumsum(),
LAC_h2h_wins_pct=(df.winner=='LAC').expanding().agg(lambda s: 100 * s.sum() / len(s)),
LAL_h2h_wins_pct=(df.winner=='LAL').expanding().agg(lambda s: 100 * s.sum() / len(s)),
)
print(df)
Answer to the OP's comment.
Code:
import pandas as pd
# Create a sample dataframe with more data points
df = pd.DataFrame(data = [['LAC','LAL', 1, '15/02/2022'], ['LAC','LAL', 1, '16/02/2022'], ['LAL','LAC', 1, '17/02/2022'], ['LAL','LAC', 1, '18/02/2022'], ['LAL','LAC', 1, '19/02/2022'], ['LAC','LAL', 1, '20/02/2022'], ['LAL','LAC', 1, '21/02/2022'], ['LAC','LAL', 1, '22/02/2022'], ['ABC','LAL', 1, '15/02/2022'], ['ABC','LAL', 1, '16/02/2022'], ['LAL','ABC', 1, '17/02/2022'], ['LAL','ABC', 1, '18/02/2022'], ['LAL','ABC', 1, '19/02/2022'], ['ABC','LAL', 1, '20/02/2022'], ['LAL','ABC', 1, '21/02/2022'], ['ABC','LAL', 1, '22/02/2022'], ['ABC','XYZ', 1, '15/02/2022'], ['ABC','XYZ', 1, '16/02/2022'], ['XYZ','ABC', 1, '17/02/2022'], ['XYZ','ABC', 1, '18/02/2022'], ['XYZ','ABC', 1, '19/02/2022'], ['ABC','XYZ', 1, '20/02/2022'], ['XYZ','ABC', 1, '21/02/2022'], ['ABC','XYZ', 1, '22/02/2022'], ['LAC','XYZ', 1, '15/02/2022'], ['LAC','XYZ', 1, '16/02/2022'], ['XYZ','LAC', 1, '17/02/2022'], ['XYZ','LAC', 1, '18/02/2022'], ['XYZ','LAC', 1, '19/02/2022'], ['LAC','XYZ', 1, '20/02/2022'], ['XYZ','LAC', 1, '21/02/2022'], ['LAC','XYZ', 1, '22/02/2022']], columns = ['winner', 'loser', 'won', 'date'])
# In order to group by games, make sorted game titles like "LAC-LAL"
df['game'] = df.apply(lambda r: '-'.join(sorted([r.winner, r.loser])), axis=1)
# Ensure that df is sorted by game and date (dates must be in ascending order)
df = df.sort_values(['game', 'date'], ignore_index=True)
# Assign 1 if the left team in the game title, otherwise 0. For example, "LAC" is the left team in the game title "LAC-LAL"
df['left_win'] = df.apply(lambda r: f'{r.winner}-{r.loser}'==r.game, axis=1)
# Do the same thing on the right team.
df['right_win'] = ~df.left_win
# Calculate the cumulative sums.
df[['left_win_cumsum', 'right_win_cumsum']] = df.groupby('game')[['left_win', 'right_win']].cumsum()
# Shift and fill the first games as 0
df[['h2h_winner', 'h2h_loser']] = df.groupby('game')[['left_win_cumsum', 'right_win_cumsum']].shift().fillna(0).astype(int)
# Check the order in a pair of winner and loser columns. If the order is different from the game title, reverse the cumsum values
f = lambda r: [r.h2h_winner, r.h2h_loser] if f'{r.winner}-{r.loser}'==r.game else [r.h2h_loser, r.h2h_winner]
df[['h2h_winner', 'h2h_loser']] = df.apply(f, axis=1).apply(pd.Series)
# Drop all the temporary columns
df = df.drop(['game', 'left_win', 'right_win', 'left_win_cumsum', 'right_win_cumsum'], axis=1)
print(df.to_markdown(stralign='center', numalign='center'))
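For reference, the per-match prior head-to-head counts the question asks about (wins of this row's winner and loser before each game, regardless of which column a team sits in) can also be sketched with an unordered pair key and cumcount; this is a minimal sketch assuming rows are already in date order, as in the sample:

```python
import pandas as pd

# Same winner/loser sequence as the sample dataframe (dates omitted;
# rows are assumed to already be in chronological order)
df = pd.DataFrame(
    data=[['LAC', 'LAL'], ['LAC', 'LAL'], ['LAL', 'LAC'], ['LAL', 'LAC'],
          ['LAL', 'LAC'], ['LAC', 'LAL'], ['LAL', 'LAC'], ['LAC', 'LAL']],
    columns=['winner', 'loser'])

# Unordered pair key so LAC-LAL and LAL-LAC fall into the same group
pair = df[['winner', 'loser']].apply(lambda r: '-'.join(sorted(r)), axis=1)

# Wins of this row's winner before this match: count of earlier rows
# in the same (pair, winner) group
df['winner_prior_wins'] = df.groupby([pair, df['winner']]).cumcount()

# Wins of this row's loser before this match: earlier games in the
# pairing minus the winner's prior wins
df['loser_prior_wins'] = df.groupby(pair).cumcount() - df['winner_prior_wins']
print(df)
```

The win percentage the OP ultimately wants then follows by dividing each prior-wins column by their sum (guarding against the 0-0 first meeting).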
QUESTION
I am trying to build a simple GET API that fetches data from a database so I can consume it from my frontend (JavaScript). The problem I am facing is that when returning JsonConvert.SerializeObject,
it weirdly escapes an array that's stored in the database, which is becoming a nightmare to parse on the frontend:
Code:
public string Get()
{
sqlQuery =
"SELECT TOP 60 * FROM tb_HandoverDetails ORDER BY SubmittedDateTimeUTC DESC";
SqlDataAdapter da = new SqlDataAdapter(sqlQuery, conn);
DataTable dt = new DataTable();
da.Fill(dt);
if (dt.Rows.Count > 0)
{
return JsonConvert.SerializeObject(dt);
}
else
{
Response.StatusCode = 400;
return "no data found";
}
}
Result:
[
{
"ID": 8,
"Submitter": "auth.user",
"SubmittedDateTimeUTC": "2021-10-05T20:29:13",
"ExcelTableOne": "\"[{\\\"caseID\\\":[\\\"123\\\",\\\"1234\\\",\\\"12345\\\",\\\"123456\\\",\\\"1234567\\\",\\\"12345678\\\",\\\"123456789\\\"]},{\\\"owner\\\":[]},{\\\"assignee\\\":[]},{\\\"comments\\\":[]}]\""
},
]
ExcelTableOne
is the array that's weirdly escaped.
ExcelTableOne data inside Database:
"[{\"caseID\":[\"123\",\"1234\",\"12345\",\"123456\",\"1234567\",\"12345678\",\"123456789\"]},{\"owner\":[\"Ayush Lal\"]},{\"assignee\":[]},{\"comments\":[]}]"
Any ideas?
TIA
ANSWER
Answered 2021-Oct-05 at 14:08
You serialized ExcelTableOne again even though it was already serialized when it was stored.
To fix, try this
var resultStr = Get();
var resultPrev = JsonConvert.DeserializeObject<List<SubmiterStr>>(resultStr);
var result = resultPrev.Select(p => new Submiter { Id = p.Id, Submitter = p.Submitter, SubmittedDateTimeUtc = p.SubmittedDateTimeUtc }).ToList();
for (int i = 0; i < result.Count; i++)
{
var excelTableStr = JsonConvert.DeserializeObject<string>(resultPrev[i].ExcelTableOneStr);
result[i].ExcelTableOne = JsonConvert.DeserializeObject<List<ExcelTableOne>>(excelTableStr);
}
classes
public partial class SubmiterStr
{
[JsonProperty("ID")]
public long Id { get; set; }
[JsonProperty("Submitter")]
public string Submitter { get; set; }
[JsonProperty("SubmittedDateTimeUTC")]
public DateTimeOffset SubmittedDateTimeUtc { get; set; }
[JsonProperty("ExcelTableOne")]
public string ExcelTableOneStr { get; set; }
}
public partial class Submiter
{
[JsonProperty("ID")]
public long Id { get; set; }
[JsonProperty("Submitter")]
public string Submitter { get; set; }
[JsonProperty("SubmittedDateTimeUTC")]
public DateTimeOffset SubmittedDateTimeUtc { get; set; }
[JsonProperty("ExcelTableOne")]
public List<ExcelTableOne> ExcelTableOne { get; set; }
}
public partial class ExcelTableOne
{
[JsonProperty("caseID", NullValueHandling = NullValueHandling.Ignore)]
//[JsonConverter(typeof(DecodeArrayConverter))]
public long[] CaseId { get; set; }
[JsonProperty("owner", NullValueHandling = NullValueHandling.Ignore)]
public object[] Owner { get; set; }
[JsonProperty("assignee", NullValueHandling = NullValueHandling.Ignore)]
public object[] Assignee { get; set; }
[JsonProperty("comments", NullValueHandling = NullValueHandling.Ignore)]
public object[] Comments { get; set; }
}
result
[
{
"ID": 8,
"Submitter": "auth.user",
"SubmittedDateTimeUTC": "2021-10-05T20:29:13-02:30",
"ExcelTableOne": [
{
"caseID": [
123,
1234,
12345,
123456,
1234567,
12345678,
123456789
]
},
{
"owner": []
},
{
"assignee": []
},
{
"comments": []
}
]
}
]
QUESTION
I have this sample data set
City
LAL
NYK
Dallas
Detroit
SF
Chicago
Denver
Phoenix
Toronto
And what I want to do is update certain values with specific values, and leave the rest as it is.
So, with SQL I would do something like this:
update table1
set city = case
when city='LAL' then 'Los Angeles'
when city='NYK' then 'New York'
Else city
end
What would be the best way to do this in Pandas?
ANSWER
Answered 2022-Jan-29 at 13:48
You can directly replace the values like this:
replacement_dict = {"LAL": "Los Angeles", "NYK": "New York"}
for key, value in replacement_dict.items():
    df.loc[df['City'] == key, 'City'] = value
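As a side note, the same mapping can be done without an explicit loop via Series.replace, which leaves unmatched values untouched, mirroring the ELSE branch of the SQL CASE; a minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({'City': ['LAL', 'NYK', 'Dallas', 'Detroit']})
replacement_dict = {'LAL': 'Los Angeles', 'NYK': 'New York'}

# replace() maps only the keys present in the dict; other values pass through
df['City'] = df['City'].replace(replacement_dict)
print(df['City'].tolist())  # ['Los Angeles', 'New York', 'Dallas', 'Detroit']
```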
QUESTION
I have created an app and stored some data in the Firebase database. I want to show the data in a RecyclerView, but not all of it is populated: four fields display, but two fields (blood group and phone number) are not displayed by the RecyclerView. Below is my code. Thanks in advance.
My Adapter RcvAdapter
public class RcvAdapter extends FirebaseRecyclerAdapter<ModelClass, RcvAdapter.MyViewHolder> {
public RcvAdapter(@NonNull @NotNull FirebaseRecyclerOptions<ModelClass> options) {
super(options);
}
@Override
protected void onBindViewHolder(@NonNull @NotNull MyViewHolder holder, int position, @NonNull @NotNull ModelClass model) {
holder.uAddress.setText(model.getAddress());
holder.uBGroup.setText(model.getBloodGroup());
holder.uCity.setText(model.getCity());
holder.uDistrict.setText(model.getDistrict());
holder.uName.setText(model.getName());
holder.uPhone.setText(model.getPhoneNumber());
holder.uContact.setText(model.getContact());
}
@NonNull
@NotNull
@Override
public MyViewHolder onCreateViewHolder(@NonNull @NotNull ViewGroup parent, int viewType) {
View view = LayoutInflater.from(parent.getContext()).inflate(R.layout.dashboard_rcv_layout, parent, false);
return new MyViewHolder(view);
}
class MyViewHolder extends RecyclerView.ViewHolder{
TextView uAddress;
TextView uBGroup;
TextView uCity;
TextView uDistrict;
TextView uName;
TextView uPhone;
TextView uContact;
public MyViewHolder(@NonNull @NotNull View itemView) {
super(itemView);
uAddress = (TextView) itemView.findViewById(R.id.rcv_address);
uBGroup =(TextView) itemView.findViewById(R.id.rcv_bloodGroup);
uCity =(TextView) itemView.findViewById(R.id.rcv_city);
uDistrict =(TextView) itemView.findViewById(R.id.rcv_district);
uName =(TextView) itemView.findViewById(R.id.rcv_name);
uPhone = (TextView) itemView.findViewById(R.id.rcv_phoneNumber);
uContact =(TextView) itemView.findViewById(R.id.rcv_contact);
}
}
Recyclerview single row layout
My MainActivity
public class DashboardActivity extends AppCompatActivity {
RecyclerView recyclerView;
RcvAdapter adapter;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_dashboard);
recyclerView = findViewById(R.id.recyclerView);
recyclerView.setLayoutManager(new LinearLayoutManager(this));
// database = FirebaseDatabase.getInstance().getReference().child("Users");
// Retrieve Firebase data into the RecyclerView
FirebaseRecyclerOptions<ModelClass> options =
new FirebaseRecyclerOptions.Builder<ModelClass>()
.setQuery(FirebaseDatabase.getInstance().getReference().child("Users"), ModelClass.class)
.build();
adapter = new RcvAdapter(options);
recyclerView.setAdapter(adapter);
}
@Override
protected void onStart() {
super.onStart();
adapter.startListening();
}
@Override
protected void onStop() {
super.onStop();
adapter.stopListening();
}
}
ModelClass
public class ModelClass {
String address;
String bloodGroup;
String city;
String district;
String name;
String phoneNumber;
String contact;
public ModelClass() {
}
public ModelClass(String address, String bloodGroup, String city, String district, String name, String phoneNumber, String contact) {
this.address = address;
this.bloodGroup = bloodGroup;
this.city = city;
this.district = district;
this.name = name;
this.phoneNumber = phoneNumber;
this.contact = contact;
}
public String getAddress() {
return address;
}
public void setAddress(String address) {
this.address = address;
}
public String getBloodGroup() {
return bloodGroup;
}
public void setBloodGroup(String bloodGroup) {
this.bloodGroup = bloodGroup;
}
public String getCity() {
return city;
}
public void setCity(String city) {
this.city = city;
}
public String getDistrict() {
return district;
}
public void setDistrict(String district) {
this.district = district;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getPhoneNumber() {
return phoneNumber;
}
public void setPhoneNumber(String phoneNumber) {
this.phoneNumber = phoneNumber;
}
public String getContact() {
return contact;
}
public void setContact(String contact) {
this.contact = contact;
}
}
Data for blood group and phone number is not populating.
DashboardActivity Layout
ANSWER
Answered 2022-Jan-20 at 11:49
Since you're using the default encoder/decoder with the FirebaseRecycler class, you should make sure to match the key names in the database with the key names in your model.
For example, Try changing "phoneNumber" in ModelClass
to "phone" (same as database).
QUESTION
I am operating with multi-column data: ID,num,score
7LMQ,Y6G,1.99
7LAA,Y65,2.95
7LZZ,Y55,8.106
7LDD,YAA,9.063
7N66,0HG,6.042
7444,HOP,5.02
7LJF,HEI,5.14
7LFD,LAL,4.128
7KCV,Cho,4.31
7GHJ,Ro,9.045
Using some simple script, I need to create two bash arrays from this data:
a simple array containing elements from the second column:
sdf_lists=("Y6G" "Y65" "Y55" "YAA" "0HG" "HOP" "HEI" "LAL" "Cho" "Ro")
an associative array made from the elements of the 2nd and the 1st columns:
dataset=( [Y6G]=7LMQ [Y65]=7LAA [Y55]=7LZZ [YAA]=7LDD [0HG]=7N66 [HOP]=7444 [HEI]=7LJF [LAL]=7LFD [Cho]=7KCV [Ro]=7GHj ).
Do I need something complex like awk to achieve it, or will a simple grep solution work as well?
ANSWER
Answered 2021-Dec-15 at 18:56
bash by itself is all you need:
declare -a sdf_lists=()
declare -A dataset=()
while IFS=, read -r id sdf value; do
sdf_lists+=("$sdf")
dataset[$sdf]="$id"   # keyed by the 2nd column, as requested
done < file.csv
declare -p sdf_lists dataset
result
declare -a sdf_lists=([0]="Y6G" [1]="Y65" [2]="Y55" [3]="YAA" [4]="0HG" [5]="HOP" [6]="HEI" [7]="LAL" [8]="Cho" [9]="Ro")
declare -A dataset=([Y6G]="7LMQ" [Y65]="7LAA" [Y55]="7LZZ" [YAA]="7LDD" [0HG]="7N66" [HOP]="7444" [HEI]="7LJF" [LAL]="7LFD" [Cho]="7KCV" [Ro]="7GHJ" )
To address Andre Wildberg's appropriate concern about CSV data, with bash 5.1, we can do
enable -f /usr/local/lib/bash/csv csv # your location may be different
while IFS= read -r line; do
csv -a fields "$line"
sdf_lists+=("${fields[1]}")
dataset[${fields[1]}]="${fields[0]}"
done < file.csv
Or, use a tool like python or ruby that ship with CSV modules in their standard library.
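For example, a Python version with the standard csv module might look like this (the in-memory sample stands in for file.csv, and the dict is keyed by the second column as the question requests):

```python
import csv
import io

# Stand-in for file.csv from the question (first three rows)
data = """7LMQ,Y6G,1.99
7LAA,Y65,2.95
7LZZ,Y55,8.106
"""

sdf_lists = []
dataset = {}
for id_, sdf, _score in csv.reader(io.StringIO(data)):
    sdf_lists.append(sdf)
    dataset[sdf] = id_  # key = 2nd column, value = 1st column

print(sdf_lists)  # ['Y6G', 'Y65', 'Y55']
print(dataset)    # {'Y6G': '7LMQ', 'Y65': '7LAA', 'Y55': '7LZZ'}
```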
QUESTION
I have a list of strings and have to extract specific text from each string. Example:
L1=["Address:S/O: Puran Mal Saini, xxxxxxxxxx,Pxxxxxxxx, Palam Vilxxxxxxiage,Palam",
"Address:S/O Radheyshyam Sharma, E SECOND",
"Address:S/O: Saroj Shahi, gram-shyampur",
"Address:S/O Birjraj Singh, Cccxxxx, NEW Azzzzzzz,",
"Address:208027 S/O: Naresh Chandra Mishra",
"Address: C/O: Mayenk Jain. 260/18, Axxxxxxxr, Opp. Haxxxx xxxxxr, Gxxxxxa",
"Address:208027S/O: Naresh Chandra Mishra,Wxxx, 127/406",
"Address: C/O Sachin Vasant Shivaji Vidhyalay, Sissssss",
"Address S/OGanesh Lal Dev, LOT NO-227, NXXXXXXXXX",
"Address S/O,Ganesh Lal Dev, LOT NO-227, XXXCCCVVVVVVV"]
My desired output from the above list:
Puran Mal Saini,
Radheyshyam Sharma
Saroj Shahi
Birjraj Singh
Naresh Chandra Mishra
Mayenk Jain
Naresh Chandra Mishra
Sachin Vasant Shivaji Vidhyalay
Ganesh Lal Dev
Ganesh Lal Dev
ANSWER
Answered 2021-Dec-12 at 07:55
import re
pattern = re.compile(".+?(?:[CS]\/O).*?([\w ]+).*", re.IGNORECASE)
print([pattern.findall(x)[0].strip() for x in L1])
OUTPUT
['Puran Mal Saini', 'Radheyshyam Sharma', 'Saroj Shahi', 'Birjraj Singh', 'Naresh Chandra Mishra', 'Mayenk Jain', 'Naresh Chandra Mishra', 'Sachin Vasant Shivaji Vidhyalay', 'Ganesh Lal Dev', 'Ganesh Lal Dev']
You can avoid the strip
in your list comprehension if you use:
pattern = re.compile(".+?(?:[CS]\/O).*?(\w[\w ]+).*", re.IGNORECASE)
Anyway - I do agree with one of the comments: your data source looks extremely messed up, and I am not sure if this pattern will work for other lines I have not seen.
QUESTION
This is a sample of my MongoDB document (try jsonformatter.com to analyse it):
{"_id":"6278686","playerName":"Rohit Lal","tournamentId":"197831","score":[{"_id":"1611380","runsScored":0,"ballFaced":0,"fours":0,"sixes":0,"strikeRate":0,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"-","catches":["Mohit Mishra"],"stumping":[],"runout":[],"participatedRunout":[]},{"_id":"1602732","runsScored":0,"ballFaced":0,"fours":0,"sixes":0,"strikeRate":0,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"-","catches":[],"stumping":[],"runout":[],"participatedRunout":[]},{"_id":"1536514","runsScored":1,"ballFaced":3,"fours":0,"sixes":0,"strikeRate":33.33,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"run out Sameer Baveja","catches":[],"stumping":[],"runout":[],"participatedRunout":[]},{"_id":"1536474","runsScored":2,"ballFaced":7,"fours":0,"sixes":0,"strikeRate":28.57,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"c Rajesh b Prasad Naik","catches":[],"stumping":[],"runout":[],"participatedRunout":[]},{"_id":"1536467","runsScored":0,"ballFaced":0,"fours":0,"sixes":0,"strikeRate":0,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"-","catches":[],"stumping":[],"runout":[],"participatedRunout":[]},{"_id":"1500825","runsScored":0,"ballFaced":0,"fours":0,"sixes":0,"strikeRate":0,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"-","catches":[],"stumping":[],"runout":[],"participatedRunout":[]},{"_id":"1461428","runsScored":18,"ballFaced":6,"fours":1,"sixes":2,"strikeRate":300,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"not out","catches":[],"stumping":[],"runout":[],"participatedRunout":[]},{"_id":"1461408","runsScored":0,"ballFaced":1,"fours":0,"sixes":0,"strikeRate":0,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"c Sudhir b Vinay Kasat 
*vk*","catches":[],"stumping":[],"runout":[],"participatedRunout":[]},{"_id":"1451175","runsScored":0,"ballFaced":0,"fours":0,"sixes":0,"strikeRate":0,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"-","catches":[],"stumping":[],"runout":[],"participatedRunout":[]},{"_id":"1451146","runsScored":0,"ballFaced":0,"fours":0,"sixes":0,"strikeRate":0,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"-","catches":[],"stumping":[],"runout":[],"participatedRunout":[]},{"_id":"1392796","runsScored":0,"ballFaced":1,"fours":0,"sixes":0,"strikeRate":0,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"c †Vinay Kedia b Lalit","catches":[],"stumping":[],"runout":[],"participatedRunout":[]}],"__v":0}
I want to sum the lengths of the catches array field of all objects inside the score array. I know I can achieve this with the aggregation framework, but I am a beginner in MongoDB and do not know many aggregation operators. Here is the aggregation pipeline I have tried, but it returns the number of occurrences of this field, not the sum of the lengths of the arrays:
[
"totalCatches": {
$size: "$score.catches"
}
]
ANSWER
Answered 2021-Nov-05 at 00:10
$unwind - deconstructs the score array field into multiple documents.
$group - groups by null (for all objects), then takes the $sum of the $size of score.catches.
db.collection.aggregate([
{
$unwind: "$score"
},
{
$group: {
_id: null,
"totalCatches": {
$sum: {
$size: "$score.catches"
}
}
}
}
])
Note: If you want the result to be based on each document (not combine all documents), then you need to change the $group
's _id as:
{
$group: {
_id: "$_id",
...
}
}
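For intuition, what this pipeline computes is equivalent to the following client-side Python over a trimmed-down copy of the sample document (field names from the question; only a few score entries shown):

```python
# Total catches = summed length of every score[i].catches array
doc = {
    "_id": "6278686",
    "score": [
        {"_id": "1611380", "catches": ["Mohit Mishra"]},
        {"_id": "1602732", "catches": []},
        {"_id": "1536514", "catches": []},
    ],
}

total_catches = sum(len(s["catches"]) for s in doc["score"])
print(total_catches)  # 1 for this trimmed sample
```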
QUESTION
This is a sample of my MongoDB document (try jsonformatter.com to analyse it):
{"_id":"6278686","playerName":"Rohit Lal","tournamentId":"197831","score":[{"_id":"1611380","runsScored":0,"ballFaced":0,"fours":0,"sixes":0,"strikeRate":0,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"-","catches":["Mohit Mishra"],"stumping":[],"runout":[],"participatedRunout":[]},{"_id":"1602732","runsScored":0,"ballFaced":0,"fours":0,"sixes":0,"strikeRate":0,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"-","catches":[],"stumping":[],"runout":[],"participatedRunout":[]},{"_id":"1536514","runsScored":1,"ballFaced":3,"fours":0,"sixes":0,"strikeRate":33.33,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"run out Sameer Baveja","catches":[],"stumping":[],"runout":[],"participatedRunout":[]},{"_id":"1536474","runsScored":2,"ballFaced":7,"fours":0,"sixes":0,"strikeRate":28.57,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"c Rajesh b Prasad Naik","catches":[],"stumping":[],"runout":[],"participatedRunout":[]},{"_id":"1536467","runsScored":0,"ballFaced":0,"fours":0,"sixes":0,"strikeRate":0,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"-","catches":[],"stumping":[],"runout":[],"participatedRunout":[]},{"_id":"1500825","runsScored":0,"ballFaced":0,"fours":0,"sixes":0,"strikeRate":0,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"-","catches":[],"stumping":[],"runout":[],"participatedRunout":[]},{"_id":"1461428","runsScored":18,"ballFaced":6,"fours":1,"sixes":2,"strikeRate":300,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"not out","catches":[],"stumping":[],"runout":[],"participatedRunout":[]},{"_id":"1461408","runsScored":0,"ballFaced":1,"fours":0,"sixes":0,"strikeRate":0,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"c Sudhir b Vinay Kasat 
*vk*","catches":[],"stumping":[],"runout":[],"participatedRunout":[]},{"_id":"1451175","runsScored":0,"ballFaced":0,"fours":0,"sixes":0,"strikeRate":0,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"-","catches":[],"stumping":[],"runout":[],"participatedRunout":[]},{"_id":"1451146","runsScored":0,"ballFaced":0,"fours":0,"sixes":0,"strikeRate":0,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"-","catches":[],"stumping":[],"runout":[],"participatedRunout":[]},{"_id":"1392796","runsScored":0,"ballFaced":1,"fours":0,"sixes":0,"strikeRate":0,"oversBowled":0,"runsConceded":0,"economyRate":0,"wickets":0,"maiden":0,"howToOut":"c †Vinay Kedia b Lalit","catches":[],"stumping":[],"runout":[],"participatedRunout":[]}],"__v":0}
I want to sum the runsScored field of all objects inside the score array. I know I can achieve this with the aggregation framework, but I am a beginner in MongoDB and do not know many aggregation operators.
ANSWER
Answered 2021-Nov-04 at 15:30
To avoid $unwind, if you want to get the total for each document, you can use this aggregation stage:
db.collection.aggregate([
{
"$project": {
"sum": {
"$sum": "$score.runsScored"
}
}
}
])
The trick here is that $score.runsScored generates an array of all the values, so you only have to $sum those values.
The other way is using $unwind and $group. Note that with _id: null you sum all values in the collection; to get the total for each document, use _id: "$_id" instead.
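Client-side, the $sum over "$score.runsScored" corresponds to this sketch (a trimmed sample document, with values taken from the question):

```python
# Per-document total runs = sum of runsScored across the score array
doc = {
    "_id": "6278686",
    "score": [
        {"_id": "1536514", "runsScored": 1},
        {"_id": "1536474", "runsScored": 2},
        {"_id": "1461428", "runsScored": 18},
    ],
}

total_runs = sum(s["runsScored"] for s in doc["score"])
print(total_runs)  # 21
```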
QUESTION
I am trying to make a Django website that displays a chart updated every 1 sec. I don't know why, but except for the lines, everything else is getting updated; below is a snapshot of the graph with the console log. The code below is my jQuery in the HTML page:
{% block jquery %}
and this is my apiview
class ChartData(APIView):
authentication_classes = []
permission_classes = []
def get(self, request, format=None):
g_dic = GetGoldRates()
now = datetime.datetime.now()
current_time = now.strftime("%H:%M:%S")
tg = {
'time': current_time,
'b1': float(g_dic['karat24']),
's24K': float(g_dic['fixkarat24']),
}
return Response(tg)
g_dic will usually get a dic like this {'api_rate': '1796.03', 'karat24': '17.501', 'karat22': '16.031', 'karat21': '15.313', 'karat18': '13.126', 'fixkarat24': '17.950', 'fixkarat22': '17.050', 'fixkarat21': '16.300', 'fixkarat18': '14.000'}
Could someone please help and point out my mistake
ANSWER
Answered 2021-Oct-22 at 17:05
You are telling Chart.js which custom key to look for on the y-axis, but not on the x-axis, so Chart.js is looking for the default x key in the object.
Changing your parsing config to include xAxisKey: 'time' will fix your issue.
QUESTION
Looking for some help on how to create a game_id variable in SQL. I'm working on an NBA project, and I can manipulate the data into a team, date, opponent format where every team has a row for each game it plays. A game_id variable would make other work in the project easier, but I don't know how to create it.
The variable itself can start from 1 or 100000, doesn't matter. I just need it to uniquely identify every game that is being played.
Below is an example table + data you can create to see my dilemma. Ideally the LAL and GSW rows would both have the same game_id, and the BKN and MIL rows would have the same game_id.
CREATE TABLE basketball_data (
team text,
dategame date,
opponent text
);
INSERT INTO basketball_data (team, dategame, opponent)
VALUES ('GSW', '2021-10-19', 'LAL');
INSERT INTO basketball_data (team, dategame, opponent)
VALUES ('LAL', '2021-10-19', 'GSW');
INSERT INTO basketball_data (team, dategame, opponent)
VALUES ('BKN', '2021-10-19', 'MIL');
INSERT INTO basketball_data (team, dategame, opponent)
VALUES ('MIL', '2021-10-19', 'BKN');
Anyone have an idea of what would be a way of creating a variable like this? If it makes a difference, I'm working in PostgreSQL. Thanks!
ANSWER
Answered 2021-Oct-07 at 00:31
You may try the following using DENSE_RANK as a window function:
Retrieving a game id during queries
SELECT
DENSE_RANK() OVER (
ORDER BY
dategame,(
CASE
WHEN team < opponent THEN CONCAT(team,opponent)
ELSE CONCAT(opponent,team)
END
)
) as game_id,
team,
dategame,
opponent
FROM
basketball_data;
Creating a new table with the same data and game id
CREATE TABLE basketball_data_with_game_id AS
SELECT
DENSE_RANK() OVER (
ORDER BY
dategame,(
CASE
WHEN team < opponent THEN CONCAT(team,opponent)
ELSE CONCAT(opponent,team)
END
)
) as game_id,
team,
dategame,
opponent
FROM
basketball_data;
SELECT * FROM basketball_data_with_game_id;
Updating the existing table to have the game id
ALTER TABLE basketball_data
ADD game_id INT DEFAULT 0;
UPDATE basketball_data
SET game_id = n.game_id
FROM (
SELECT
DENSE_RANK() OVER (
ORDER BY
dategame,(
CASE
WHEN team < opponent THEN CONCAT(team,opponent)
ELSE CONCAT(opponent,team)
END
)
) as game_id,
team,
dategame,
opponent
FROM
basketball_data
) n
WHERE basketball_data.game_id=0 AND
basketball_data.team=n.team AND
basketball_data.dategame=n.dategame AND
basketball_data.opponent=n.opponent;
SELECT * FROM basketball_data;
Let me know if this works for you.
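If part of this pipeline lives in Python, the same DENSE_RANK idea can be sketched in pandas with an order-independent pair key and groupby(...).ngroup(); column names follow the table above:

```python
import pandas as pd

df = pd.DataFrame({
    'team':     ['GSW', 'LAL', 'BKN', 'MIL'],
    'dategame': ['2021-10-19'] * 4,
    'opponent': ['LAL', 'GSW', 'MIL', 'BKN'],
})

# Order-independent pair key, same idea as the CASE expression in the SQL
pair = df.apply(lambda r: ''.join(sorted([r['team'], r['opponent']])), axis=1)

# ngroup() hands out one integer per (date, pair) combination, i.e. per game
df['game_id'] = df.groupby([df['dategame'], pair]).ngroup() + 1
print(df)
```

Both rows of each game share a game_id, exactly as the question requires.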
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install lal
Prebuilt binaries for Linux, macOS (Darwin), and Windows are available on the lal GitHub releases page. Using the latest release binary is the recommended approach. The naming format is lal_<version>_<platform>.zip, e.g. lal_v0.20.0_linux.zip. lal can also be built from source wherever the Go compiler toolchain runs, e.g. for other architectures such as arm32 and mipsle, which have been tested by the community.