nosy | classifying text harvested from social media

by pascalc | Python | Version: Current | License: No License

kandi X-RAY | nosy Summary

nosy is a Python library typically used in Telecommunications, Media, Advertising, and Marketing applications. nosy has no bugs, it has no vulnerabilities, it has a build file available, and it has low support. You can download it from GitHub.

An application used to monitor and analyse new social media such as Twitter.

Table of contents

* nosy
* nosy - Contains the algorithms and utilities
* algorithm
* lang.py - Tries to determine the language. Currently uses word matching against an English dictionary and counts the number of matches (a rough sketch of this idea follows the table of contents).
* persistent_classifier.py - An abstract classifier class that defines logic for saving/loading state to/from a Redis database.
* naive_bayes.py - A persistent classifier implementing naive Bayes.
* random_classifier.py - A classifier that emits random classifications, used for testing.
* open_struct.py - A class that allows arbitrary fields to be set on it, modelled after Ruby's OpenStruct class.
* mongo_open_struct.py - An OpenStruct that can be persisted to MongoDB, and also loaded by searching.
* base.py - Base class for both classification and classified objects. Contains logic for normalizing text before insertion into the database.
* model.py - Defines ClassificationObject and ClassifiedObject, which are MongoOpenStructs. ClassificationObjects are stored in the corpus before classification, and ClassifiedObjects are ClassificationObjects that have been classified.
* stream_handler.py - StreamHandler is an abstract class that consumes HTTP streams. TwitterHandler is a StreamHandler that connects to Twitter's API.
* classifier
* classify_handler.py - A REST end-point for the classifiers. See ADD for parameters.
* tweet_classifier.py - Consumes JSON from the Twitter stream and saves it in the proper format to MongoDB.
* run_classifier.sh - Bash script to run tweet_classifier.py with the correct arguments.
* stream_example.rb - An example demonstrating real-time publishing using the Juggernaut library (JavaScript). NOTE: Deprecated. HTML5 offers server-sent events, which replicate the Juggernaut functionality.
* corpus - Logic for interacting with the corpus
* corpus_handler.py - A REST end-point for browsing the corpus.
* tweet_harvester.py - Runs a TwitterHandler and transforms Twitter's JSON into our classification objects.
* flicktweet-scraper - Example tweets about movies from www.flicktweets.com. These were used in our demo session.
* move_reviews - Corpus of positive and negative movie reviews. A script inserts these into the corpus as classification objects. Used in our demo session.
* setup.py and reinstall - Installs the nosy library into Python's site-packages.
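
The lang.py module described above relies on matching words against an English dictionary and counting the hits. The snippet below is only a rough sketch of that idea, not the module's actual code; the word list, the threshold, and the function name are assumptions made for illustration.

# Rough sketch of the word-matching idea described for lang.py above.
# Not the actual module: the word list, threshold, and names are assumptions.
import re

# Assumption: a small set of common English words stands in for the dictionary.
ENGLISH_WORDS = {
    "the", "a", "an", "and", "is", "are", "to", "of", "in", "on",
    "i", "you", "it", "this", "that", "for", "with", "not", "have",
}

def looks_english(text, threshold=0.4):
    """Return True if enough tokens match the English word list."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return False
    matches = sum(1 for token in tokens if token in ENGLISH_WORDS)
    return matches / len(tokens) >= threshold

print(looks_english("this is a tweet about a movie"))   # True
print(looks_english("ceci n'est pas de l'anglais"))     # False

A ratio-based threshold keeps the check cheap, which suits a streaming pipeline; the real module may weigh matches differently.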

            kandi-support Support

              nosy has a low active ecosystem.
              It has 9 star(s) with 5 fork(s). There are 4 watchers for this library.
              It had no major release in the last 6 months.
              nosy has no issues reported. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of nosy is current.

            kandi-Quality Quality

              nosy has 0 bugs and 0 code smells.

            kandi-Security Security

              nosy has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              nosy code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              nosy does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

            kandi-Reuse Reuse

              nosy releases are not available. You will need to build from source code and install.
              Build file is available. You can build the component from source.

            Top functions reviewed by kandi - BETA

             kandi has reviewed nosy and discovered the functions below to be its top functions. This is intended to give you an instant insight into nosy's implemented functionality and help you decide if it suits your requirements. A rough sketch of the text-processing flow appears after the list.
            • Classify a text
            • Load the features
            • Splits the features
            • Classify a feature
            • Train the network
            • Convert a classification object to a bag
            • Main thread
            • Process the text
            • Contract phrases
            • Create a ClassificationObject from json
            • Expand contractions
            • Save the document
            • Create a query to update the object
            • Generate a new unique ID
            • Compute accuracy
            • Saves a classification object
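
             Several of these functions ("Process the text", "Expand contractions", "Convert a classification object to a bag") point at normalizing raw tweet text into a bag of words before classification. The sketch below only illustrates that general flow; the names and the tiny contraction table are assumptions, not nosy's actual implementation.

# Illustrative sketch of the text-processing flow suggested by the function
# list above (expand contractions, normalize, convert to a bag of words).
# Names and the contraction table are assumptions, not nosy's real code.
import re
from collections import Counter

CONTRACTIONS = {           # assumption: a small sample table
    "can't": "cannot",
    "won't": "will not",
    "it's": "it is",
    "don't": "do not",
}

def expand_contractions(text):
    """Replace known contractions with their expanded forms."""
    for short, expanded in CONTRACTIONS.items():
        text = re.sub(re.escape(short), expanded, text, flags=re.IGNORECASE)
    return text

def to_bag(text):
    """Normalize text and count word occurrences (a 'bag of words')."""
    text = expand_contractions(text.lower())
    words = re.findall(r"[a-z]+", text)
    return Counter(words)

print(to_bag("It's great, don't you think? Don't miss it!"))
# Counter({'it': 2, 'do': 2, 'not': 2, 'is': 1, 'great': 1, 'you': 1, 'think': 1, 'miss': 1})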

            nosy Key Features

            No Key Features are available at this moment for nosy.

            nosy Examples and Code Snippets

            No Code Snippets are available at this moment for nosy.

            Community Discussions

            QUESTION

            Problem with manually allocating memory address for a pointer
            Asked 2022-Feb-16 at 16:50

            I am trying to work with flash memory on MPC5748G - a microcontroller from NXP running FreeRTOS 10.0.1, and I get some behaviour that I can't understand.

             I am allocating memory manually, and the assignment seems not to work. However, I can reach the value at the address when using 'printf' - but only from the same function. (I'm using a copy of the pointer, to make sure that some sort of compiler optimisation doesn't take place.)

            ...

            ANSWER

            Answered 2022-Feb-16 at 16:50

             The problem was with writing to FLASH memory - it had not been correctly initialized.

             The proper way to write to flash on the MPC5748G using SDK 3.0.3 is as follows:

            • save flash controller cache
            • initialise flash
            • check and protect UT block
            • unblock an address space
            • erase a block in this space
            • check if the space block is blank
            • program the block
            • verify if the block is programmed correctly
            • check sum of the programmed data
            • restore flash controller cache

            The strange behaviour of printf and pointer was due to compiler optimization. After changing the compiler flags to -O0 (no optimization), the error was consistent.

            The same consistent error can be achieved when marking the pointers as 'volatile'.

            Source https://stackoverflow.com/questions/71069626

            QUESTION

            How to add a libgcc.a library to my keil project
            Asked 2022-Feb-11 at 18:19

             I need to make a library for a customer who is using GCC. I have a working Keil project compiled with GCC. The next step is to make a library. I removed the main file from the project and Keil generated a libname.a library file. Now I want to create a new project with the same main and the libname.a library. I'm failing to do so. I added this library in the Options/Linker tab and added the library path, but I am getting "c:/gnu arm embedded toolchain/10p3_2021_10/bin/../lib/gcc/arm-none-eabi/10.3.1/../../../../arm-none-eabi/bin/ld.exe: cannot find -llibname.a". Here are all the options/flags:

            -mcpu=cortex-m4 -mthumb -o ./DebugConfig/name_main.elf -L ./DebugConfig *.o -llibname.a -mcpu=cortex-m4 --specs=nosys.specs -Wl,--gc-sections -static -Wl,--start-group -Wl,--end-group --specs=nano.specs -mthumb -Wl,--start-group -lc -lm -Wl,--end-group

            Thank you in advance.

            ...

            ANSWER

            Answered 2022-Feb-11 at 18:19

             All that was needed was to put a colon in front of the name: :libname.a ...

             I hate fighting with tools. Here is the solution: I need to add, into the misc section, separately -L path\lib\one and -l libname, or -l:name.a.

             I did not find where it is documented that my options are:

             1. -l name, in which case the file libname.a will be searched for;
             2. -l:name.a, in which case the file name.a is the target library.

            Why does it have to be so convoluted and complicated... This is just a rhetorical question, obviously. I hope this will help someone else in the future.

            Source https://stackoverflow.com/questions/71073373

            QUESTION

            react exclude config files from build
            Asked 2022-Feb-09 at 08:41

             In my React app I have 4 separate config files for 4 different environments to run in: test for unit tests, dev for use while developing, staging for the staging environment (almost identical to prod), and prod for the actual customer-facing production server.
             One setting in those config files is the API address (127.0.0.1:7777 for dev, api.staging-app.com for staging, api.app.com for prod).

             React will use the config file corresponding to the REACT_APP_NODE_ENV variable. But I have recently noticed that React will always bundle every config file, to be discovered by nosy users.

             Since I would like to keep the location of the staging server secret (it's still protected, even if it were discovered), I would like to exclude all the unused config files from the webpack build. Is there a way to do this?

            ...

            ANSWER

            Answered 2022-Feb-09 at 08:41

            I don't know why this works but I did it like this:

            Source https://stackoverflow.com/questions/71008724

            QUESTION

            Android Studio won't open on Windows 11
            Asked 2022-Jan-29 at 16:24

            So I can't get Android Studio to open through the studio64.exe launcher.

             It was working perfectly just a few days ago and now it won't launch. Tried uninstalling and installing the latest Bumblebee 2021.1 with no luck. Tried deleting the Android Studio files in AppData but still no luck. Even tried reverting to an older restore point with no success.

            I have already set JDK_HOME and JAVA_HOME to the included Android Studio jre. After doing this the app now launches by running studio.bat but the actual launcher still fails. It just shows up for a brief second in Task Manager then disappears. No error messages or anything, just blank.

             The weird thing is that I also have IntelliJ IDEA but that launches perfectly fine through its launcher. It's just Android Studio that is giving me issues.

            Any help will be appreciated!

            Edit: Attaching the log from the IDE.

            ...

            ANSWER

            Answered 2022-Jan-29 at 16:24

             Ok, I have figured out the issue. There seems to be an issue with the included JRE version: 11.0.11+9-b60-7590822 (Oracle Corporation) which is causing issues with Java AWT used for the launcher UI. This issue seems to only happen when you have installed the Windows 11 KB5008353 Cumulative update to Build 22000.469. Or maybe the included JRE is broken.

             But the current fix I did was take the contents of C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2021.3.1\jbr which is JRE version: 11.0.13+7-b1751.21 (JetBrains s.r.o.) and place them in C:\Program Files\Android\Android Studio\jre (you should of course rename the old jre folder so you don't lose the original files).

            Doing this will allow you to launch the app through studio64.exe directly like before. Until this is fixed this is the only way to get it working again. You can star the issue I created on Google's issue tracker: https://issuetracker.google.com/issues/216891004

            Source https://stackoverflow.com/questions/70906574

            QUESTION

            Linker - Data constants replaced by garbage
            Asked 2021-Dec-20 at 14:06

             I'm trying to integrate a third-party library into my project. I compile all my files with no problems, but when I try to add the custom library, all constants in this library become garbage.

            Here is my linker command (split over lines for readability):

            ...

            ANSWER

            Answered 2021-Dec-20 at 14:06

             The 0x00000004 in the library is not a real value - it's just an offset which needs to be added to some external symbol. The name of the symbol is stored in the so-called relocation section and can be displayed by running objdump -Sr.

             The value of the external symbol is unknown until link-time, but once the code is fully linked, the offset is replaced with the final (so-called "absolute") address, which happens to be 0x17ffc880 in your case.

            Source https://stackoverflow.com/questions/70421198

            QUESTION

            Unable to retrieve version information from Elasticsearch nodes. Request timed out
            Asked 2021-Dec-05 at 01:46

             I am installing Kibana and Elasticsearch version 7.15.1 as per the instructions mentioned in the link Install Kibana with Docker

            The commands I am using are

            ...

            ANSWER

            Answered 2021-Dec-03 at 12:50

             Your Kibana service is missing information about the Elasticsearch user/password.

             A few days ago I tried to create a minimalistic swarm stack and this is the result:

            docker-compose.yml

            Source https://stackoverflow.com/questions/69791608

            QUESTION

            Elasticsearch service hang and kills while data insertion jvm heap
            Asked 2021-Dec-04 at 14:11

             I am using Elasticsearch version 5.6.13 and I need some expert configuration advice. I have 3 nodes on the same system (node1, node2, node3), where node1 is the master and the other 2 are data nodes. I have around 40 indexes; I created all of them with the default 5 primary shards and some of them have 2 replicas. The issue I am facing right now: my data (from scraping) is growing day by day and I have 400GB of data in one of my indexes; similarly, 3 other indexes are also very loaded. For the last few days, while inserting data my Elasticsearch hangs and then the service is killed, which affects my processing. I have tried several things. I am sharing the system specs and current ES configuration + logs. Please suggest a solution.

             The system specs: RAM: 160 GB, CPU: AMD EPYC 7702P 64-Core Processor, Drive: 2 TB SSD (the drive on which ES is installed still has 500 GB left)

             ES configuration JVM options: -Xms26g, -Xmx26g (I just tried this but am not sure what the perfect heap size is for my scenario). I only edited the lines above and the rest of the file is as default. I edited this in the jvm.options files on all three nodes.

            ES LOGS

             [2021-09-22T12:05:17,983][WARN ][o.e.m.j.JvmGcMonitorService] [sashanode1] [gc][170] overhead, spent [7.1s] collecting in the last [7.2s]
             [2021-09-22T12:05:21,868][WARN ][o.e.m.j.JvmGcMonitorService] [sashanode1] [gc][171] overhead, spent [3.7s] collecting in the last [1.9s]
             [2021-09-22T12:05:51,190][WARN ][o.e.m.j.JvmGcMonitorService] [sashanode1] [gc][172] overhead, spent [27.7s] collecting in the last [23.3s]
             [2021-09-22T12:06:54,629][WARN ][o.e.m.j.JvmGcMonitorService] [cluster_name] [gc][173] overhead, spent [57.5s] collecting in the last [1.1m]
             [2021-09-22T12:06:56,536][WARN ][o.e.m.j.JvmGcMonitorService] [cluster_name] [gc][174] overhead, spent [1.9s] collecting in the last [1.9s]
             [2021-09-22T12:07:02,176][WARN ][o.e.m.j.JvmGcMonitorService] [cluster_name] [gc][175] overhead, spent [5.4s] collecting in the last [5.6s]
             [2021-09-22T12:06:56,546][ERROR][o.e.i.e.Engine ] [cluster_name] [index_name][3] merge failed java.lang.OutOfMemoryError: Java heap space

            [2021-09-22T12:06:56,548][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [cluster_name] fatal error in thread [elasticsearch[cluster_name][bulk][T#25]], exiting java.lang.OutOfMemoryError: Java heap space

            Some more logs

             [2021-09-22T12:10:06,526][INFO ][o.e.n.Node ] [cluster_name] initializing ...
             [2021-09-22T12:10:06,589][INFO ][o.e.e.NodeEnvironment ] [cluster_name] using [1] data paths, mounts [[(D:)]], net usable_space [563.3gb], net total_space [1.7tb], spins? [unknown], types [NTFS]
             [2021-09-22T12:10:06,589][INFO ][o.e.e.NodeEnvironment ] [cluster_name] heap size [1.9gb], compressed ordinary object pointers [true]
             [2021-09-22T12:10:07,239][INFO ][o.e.n.Node ] [cluster_name] node name [sashanode1], node ID [2p-ux-OXRKGuxmN0efvF9Q]
             [2021-09-22T12:10:07,240][INFO ][o.e.n.Node ] [cluster_name] version[5.6.13], pid[57096], build[4d5320b/2018-10-30T19:05:08.237Z], OS[Windows Server 2019/10.0/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_261/25.261-b12]
             [2021-09-22T12:10:07,240][INFO ][o.e.n.Node ] [cluster_name] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Delasticsearch, -Des.path.home=D:\Databases\ES\elastic and kibana 5.6.13\es_node_1, -Des.default.path.logs=D:\Databases\ES\elastic and kibana 5.6.13\es_node_1\logs, -Des.default.path.data=D:\Databases\ES\elastic and kibana 5.6.13\es_node_1\data, -Des.default.path.conf=D:\Databases\ES\elastic and kibana 5.6.13\es_node_1\config, exit, -Xms2048m, -Xmx2048m, -Xss1024k]

             Also, in my ES folder there are many files with random names (java_pid197036.hprof). Further details can be shared; please suggest any further configuration. Thanks

            The output for _cluster/stats?pretty&human is

            { "_nodes": { "total": 3, "successful": 3, "failed": 0 }, "cluster_name": "cluster_name", "timestamp": 1632375228033, "status": "red", "indices": { "count": 42, "shards": { "total": 508, "primaries": 217, "replication": 1.3410138248847927, "index": { "shards": { "min": 2, "max": 60, "avg": 12.095238095238095 }, "primaries": { "min": 1, "max": 20, "avg": 5.166666666666667 }, "replication": { "min": 1.0, "max": 2.0, "avg": 1.2857142857142858 } } }, "docs": { "count": 107283077, "deleted": 1047418 }, "store": { "size": "530.2gb", "size_in_bytes": 569385384976, "throttle_time": "0s", "throttle_time_in_millis": 0 }, "fielddata": { "memory_size": "0b", "memory_size_in_bytes": 0, "evictions": 0 }, "query_cache": { "memory_size": "0b", "memory_size_in_bytes": 0, "total_count": 0, "hit_count": 0, "miss_count": 0, "cache_size": 0, "cache_count": 0, "evictions": 0 }, "completion": { "size": "0b", "size_in_bytes": 0 }, "segments": { "count": 3781, "memory": "2gb", "memory_in_bytes": 2174286255, "terms_memory": "1.7gb", "terms_memory_in_bytes": 1863786029, "stored_fields_memory": "105.6mb", "stored_fields_memory_in_bytes": 110789048, "term_vectors_memory": "0b", "term_vectors_memory_in_bytes": 0, "norms_memory": "31.9mb", "norms_memory_in_bytes": 33527808, "points_memory": "13.1mb", "points_memory_in_bytes": 13742470, "doc_values_memory": "145.3mb", "doc_values_memory_in_bytes": 152440900, "index_writer_memory": "0b", "index_writer_memory_in_bytes": 0, "version_map_memory": "0b", "version_map_memory_in_bytes": 0, "fixed_bit_set": "0b", "fixed_bit_set_memory_in_bytes": 0, "max_unsafe_auto_id_timestamp": 1632340789677, "file_sizes": { } } }, "nodes": { "count": { "total": 3, "data": 3, "coordinating_only": 0, "master": 1, "ingest": 3 }, "versions": [ "5.6.13" ], "os": { "available_processors": 192, "allocated_processors": 96, "names": [ { "name": "Windows Server 2019", "count": 3 } ], "mem": { "total": "478.4gb", "total_in_bytes": 513717497856, "free": "119.7gb", "free_in_bytes": 128535437312, "used": "358.7gb", "used_in_bytes": 385182060544, "free_percent": 25, "used_percent": 75 } }, "process": { "cpu": { "percent": 5 }, "open_file_descriptors": { "min": -1, "max": -1, "avg": 0 } }, "jvm": { "max_uptime": "1.9d", "max_uptime_in_millis": 167165106, "versions": [ { "version": "1.8.0_261", "vm_name": "Java HotSpot(TM) 64-Bit Server VM", "vm_version": "25.261-b12", "vm_vendor": "Oracle Corporation", "count": 3 } ], "mem": { "heap_used": "5gb", "heap_used_in_bytes": 5460944144, "heap_max": "5.8gb", "heap_max_in_bytes": 6227755008 }, "threads": 835 }, "fs": { "total": "1.7tb", "total_in_bytes": 1920365228032, "free": "499.1gb", "free_in_bytes": 535939969024, "available": "499.1gb", "available_in_bytes": 535939969024 }, "plugins": [ ], "network_types": { "transport_types": { "netty4": 3 }, "http_types": { "netty4": 3 } } } }

            The jvm.options file.

            ...

            ANSWER

            Answered 2021-Oct-08 at 06:38

             My issue is solved. It was due to the heap size: I am running ES as a service and the heap size is 2 GB by default, so my setting was not being picked up. I installed a new service with the updated jvm.options file with a heap size of 10 GB and then ran my cluster. The heap size went from 2 GB to 10 GB, and my problem is solved. Thanks for the suggestions.

             To check your heap size, use this command.
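
             That command is not reproduced above. As a purely illustrative aside (not the command from the answer), the following Python sketch queries Elasticsearch's _cat/nodes API to show each node's current heap usage; the URL and the absence of authentication are assumptions.

# Illustrative only: one way to check per-node JVM heap usage via the
# _cat/nodes API. This is not the command referenced in the answer above.
import requests

ES_URL = "http://localhost:9200"   # assumption: adjust host/port/auth as needed

def heap_usage():
    resp = requests.get(
        f"{ES_URL}/_cat/nodes",
        params={"h": "name,heap.current,heap.percent,heap.max", "format": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

for node in heap_usage():
    print(node["name"], node["heap.current"], "/", node["heap.max"],
          f"({node['heap.percent']}%)")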

            Source https://stackoverflow.com/questions/69280083

            QUESTION

            My Server keeps crashing if User refreshes too much or Quick
            Asked 2021-Nov-12 at 08:09

             I'm at a loss and can't think of how to even Google this.

             I'm building a MERN fullstack app as my very first solo project. Someone told me it was too big (they were right) and that I would get burned out (I am). Well, joke's on them. They were right.

             I have the server responding with a list of, like, 4 users' info from a database of, like, 4 users as a test. This is set up as a useEffect on the front end; not sure if that helps, people have told me it shouldn't have an effect on it.

             It works only half the time, if not less than that. Other times the server crashes like my Dogecoin investment, saying I've already used res.send, which makes me want to throw my computer and monitors out my window and scare my neighbor downstairs, which wouldn't be so bad since she's nosy as hell and always comments on my mail and thinks she's my mailman.

             Here's the error on the server console

            ...

            ANSWER

            Answered 2021-Nov-12 at 07:42

             Probably not the cause, but this seems wrong; you probably meant to have curly brackets around the res.send part.

            Source https://stackoverflow.com/questions/69938867

            QUESTION

            Elasticsearch crashing
            Asked 2021-Sep-27 at 16:35

             We're having issues with Elasticsearch crashing from time to time. It also sometimes spikes RAM + CPU and the server becomes unresponsive.

            We have left most of the settings as is, but had to add more RAM to JVM heap (48GB) to get it not to crash frequently.

            I started digging and apparently 32GB is the max you should be using. We'll tweak that.

            The server is:

            ...

            ANSWER

            Answered 2021-Sep-14 at 22:07

            a few things

            • high cpu or memory use will not be due to not setting those gateway settings, and as a single node cluster they are somewhat irrelevant
            • we recommend keeping heap <32GB, see https://www.elastic.co/guide/en/elasticsearch/reference/current/advanced-configuration.html#set-jvm-heap-size
             • you can never allocate replica shards on the same node as the primary. Thus for a single node cluster you either need to remove replicas (risky; a sketch of doing this via the settings API follows this list), or add another (ideally) 2 nodes to the cluster
            • setting up a multiple node cluster on the same host is a little pointless. sure your replicas will be allocated, but if you lose the host you lose all of the data anyway
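
             As an aside on the first option above (removing replicas), a minimal sketch of doing this through the index settings REST API is shown below; the host and index pattern are assumptions, and, as the answer notes, running without replicas is risky.

# Minimal sketch of the "remove replicas" option mentioned above, via the
# index settings REST API. Host and index pattern are assumptions; dropping
# replicas on a single-node cluster means no redundancy if the node is lost.
import requests

ES_URL = "http://localhost:9200"   # assumption

def drop_replicas(index_pattern="*"):
    """Set number_of_replicas to 0 for all indices matching the pattern."""
    resp = requests.put(
        f"{ES_URL}/{index_pattern}/_settings",
        json={"index": {"number_of_replicas": 0}},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

print(drop_replicas())   # expect {'acknowledged': True}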

            I'd suggest looking at https://www.elastic.co/guide/en/elasticsearch/reference/7.14/bootstrap-checks.html and applying the settings it talks about, because even if you are running a single node those are what we refer to as production-ready settings

            that aside, do you have Monitoring enabled? what do your Elasticsearch logs show? what about hot threads? or slow logs?

             (and to be pedantic, it's Elasticsearch, the s is not camelcase ;))

            Source https://stackoverflow.com/questions/69184558

            QUESTION

            elasticsearch helm charts (from elastic or bitnami) don't work with k3s
            Asked 2021-Aug-31 at 11:34

             G'day everybody.

             For a week I've been unsuccessfully trying to spin up an Elasticsearch cluster on the latest k3s v1.21.3+k3s1. Both the bitnami/elasticsearch and elastic/elasticsearch charts don't work, although with different errors.

             The thing is, I've tried to spin up an Elasticsearch cluster in k3s on absolutely clean VMs:

            • Ubuntu 20.04
            • Ubuntu 21.04
            • Debian 10.10
             • from a 1-core CPU, 4GB of RAM and 30GB storage to 4 CPUs, 16GB and 60GB storage (at first I thought it might be a requirements issue)
            • from 1 node to a full k3s cluster with 3 nodes

             At the same time, both charts spun up like a charm the first time I tried them inside minikube. All the config was always default. Please help, I've lost hope...

            Here is an error log of a master pod from the bitnami/elasticsearch chart:

            ...

            ANSWER

            Answered 2021-Aug-31 at 11:34

            Although "change the k3s version" is not an answer to the problem per se, this issue seems related to the specific v1.21.3+k3s1 version. I've tried the default installation of the bitnami/elasticsearch chart on these:

            Source https://stackoverflow.com/questions/68941079

             Community Discussions and Code Snippets contain sources that include the Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install nosy

            You can download it from GitHub.
            You can use nosy like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.

            Support

             For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            Find more information at:

            CLONE
          • HTTPS

            https://github.com/pascalc/nosy.git

          • CLI

            gh repo clone pascalc/nosy

          • sshUrl

            git@github.com:pascalc/nosy.git

