Data Representation in Hindi / डाटा रिप्रजेंटेशन क्या है?

आज के इस पोस्ट में हम आपको डाटा रिप्रजेंटेशन के बारे में विस्तार से बताएँगे. इसके साथ डाटा, डाटा प्रोसेसिंग, डाटा मापने की इकाई, डाटा स्टोरेज इत्यादि की भी जानकारी देंगे. डाटा रिप्रजेंटेशन की पूरी जानकारी के लिए पोस्ट को अंत तक जरुर पढ़ें.

डाटा रिप्रजेंटेशन क्या है? – Data Representation in Hindi

Data representation का अर्थ है कि हम किसी डाटा को कैसे represent करते हैं अर्थात् कैसे दर्शाते हैं। यह दो शब्दों से मिलकर बना है: data + representation। यहाँ डाटा का मतलब है information या fact से। डाटा किसी भी form में हो सकता है जैसे audio, video, pictures, gif आदि, और इसी डाटा को जिस तरह से represent किया जाता है, वह data representation कहलाता है।

Computer में सभी डाटा अर्थात् audio, video, pictures बाइनरी के फॉर्म में स्टोर किए जाते हैं; computer में होने वाली इसी प्रक्रिया को data representation कहते हैं।

डाटा क्या हैं ?

डाटा एक raw fact होता हैं जो अपने raw form में किसी काम का नहीं होता है. लेकिन उसी data को जब हम process और interpret करते हैं तब जाकर उनका सही मतलब सामने आता है, और जो की हमारे लिए बहुत उपयोगी होते हैं. इन्ही processed data को Information भी कहा जाता है. इसी information को computer में audio, video, pictures, MP3 के फॉर्म में save किया जाता है। जिसे हम डाटा कहते हैं।


डाटा मापने की इकाई

Computer में कितना डाटा रखा जा सकता है, उसे मापने के लिए कुछ standard units का उपयोग किया जाता है। डाटा को उसकी capacity और space के हिसाब से मापा जाता है, जिसके लिए निम्न units use की जाती हैं –


Bit

Bit यानी ‘Binary Digit’, यह मापन की सबसे छोटी इकाई है। एक बिट की वैल्यू केवल एक बाइनरी डिजिट हो सकती है, चाहे वह 0 हो या 1, अर्थात् 1 bit = एक binary digit (0 या 1)। कंप्यूटर में लिखा गया हर अक्षर मेमोरी में कुछ bits की जगह लेता है। कंप्यूटर बाइनरी कोड्स की ही भाषा समझता है, और इन्हीं बाइनरी कोड्स को Bit कहा जाता है।


बिट दो ही अवस्थाओं में जानकारी सेव कर सकती है – On या Off (1 या 0)। कंप्यूटर की छोटी से छोटी और बड़ी से बड़ी activities बिट के द्वारा ही संपन्न होती हैं। Bit को English के small letter ‘b’ से दर्शाया जाता है।
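ऊपर बताई गई bit की अवधारणा को एक छोटे Python sketch से देखा जा सकता है (यह केवल उदाहरण के लिए है; `char_to_bits` नाम हमारा माना हुआ है):

```python
# किसी character को उसके 8-bit binary (0/1) रूप में दिखाना
def char_to_bits(ch):
    # ord() character का numeric code देता है, format() उसे 8 bits में लिखता है
    return format(ord(ch), "08b")

print(char_to_bits("A"))  # 01000001 (code 65 के 8 bits)
```

यहाँ दिखता है कि एक अक्षर मेमोरी में 8 bits (1 byte) की जगह लेता है।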


Nibble

यह मापन की दूसरी सबसे छोटी इकाई है। 4 bit = 1 nibble, अर्थात् 1 nibble की value 4 bit होती है।


Byte (B)

यह 8 बिट से मिलकर बनता है अर्थात् 8 bit = 1 byte, यानी 1 byte 2 nibble से मिलकर बना है। यह मेमोरी की standard unit है: कोई भी डाटा स्टोर करने पर वह कम से कम 1 बाइट का स्पेस occupy करता ही है। एक बाइट information की 256 states (2⁸) स्टोर कर सकती है। Capital ‘B’ का मतलब हमेशा byte होता है और small ‘b’ का मतलब bit।


Kilobyte (KB)

1024 बाइट मिलकर एक किलोबाइट बनता है। Kilobytes का इस्तेमाल अक्सर छोटी files का size मापने के लिए किया जाता है; उदाहरण के लिए, एक plain text document में लगभग 10 KB data हो सकता है, इसलिए उसकी file size करीब 10 kilobytes होती है। यह माप अक्सर मेमोरी क्षमता और डिस्क स्टोरेज का वर्णन करने के लिए उपयोग की जाती है।


Megabyte (MB)

1024 KB मिलकर एक मेगाबाइट बनता है।

MB में KB के मुकाबले डाटा स्टोर करने की कैपेसिटी ज्यादा होती है। Megabyte का उपयोग अक्सर बड़ी फ़ाइलों के आकार को मापने के लिए किया जाता है। उदाहरण के लिए, एक high resolution वाली JPEG इमेज फ़ाइल एक से पाँच मेगाबाइट तक की हो सकती है।


एक डिजिटल कैमरे की uncompressed raw images को 10 से 50 MB डिस्क स्थान की आवश्यकता हो सकती है। Compressed format में सहेजा गया तीन मिनट का गीत लगभग तीन मेगाबाइट का हो सकता है। मीडिया के अधिकांश अन्य रूपों की क्षमता, जैसे फ्लैश ड्राइव और हार्ड ड्राइव, आमतौर पर गीगाबाइट या टेराबाइट्स में मापी जाती है।

Gigabyte (GB)

1024 मेगाबाइट मिलकर 1 गीगाबाइट होता है, अर्थात् 1 GB = 1024 MB। GB का साइज MB के मुकाबले बड़ा होता है, इसलिए इसमें बड़ी फाइल्स की स्टोरेज आ जाती है। 1 GB की क्षमता में लगभग 230 MP3 songs store किए जा सकते हैं।


Terabyte (TB)

1024 गीगाबाइट मिलकर एक टेराबाइट होता है। TB की full form Terabyte है। Terabyte, GB के मुकाबले बड़ा होता है: 1 TB = 1024 GB। इसमें बहुत सारा डाटा स्टोर करने की क्षमता होती है।


Petabyte (PB)

1024 TB मिलकर एक Petabyte होता है। PB की full form Petabyte है। 1 PB = 1024 TB, यानी लगभग 10 लाख GB। आम उपयोग के लिए इतनी बड़ी क्षमता की single storage device अभी प्रचलित नहीं है।


Exabyte (EB)

1024 PB मिलकर एक Exabyte होता है। यह बहुत बड़ी स्टोरेज यूनिट है, जिसमें बहुत अधिक मात्रा में डाटा स्टोर किया जा सकता है; कहा जाता है कि 5 Exabyte में पूरी मानव जाति द्वारा अब तक बोले गए सभी शब्दों को स्टोर किया जा सकता है।


Zettabyte (ZB)

1024 EB मिलकर एक Zettabyte होता है, अर्थात् 1024 EB = 1 ZB। यह बहुत ही बड़ा स्टोरेज represent करता है, इसलिए सामान्य devices से इसकी तुलना नहीं की जा सकती।


Yottabyte (YB)

1024 ZB मिलकर एक Yottabyte होता है, अर्थात् 1024 ZB = 1 YB।
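ऊपर बताई गई units (1 KB = 1024 B, 1 MB = 1024 KB, आदि) के आपसी रूपांतरण का एक छोटा Python sketch; `to_bytes` नाम केवल उदाहरण के लिए माना गया है:

```python
# Byte से आगे की इकाइयाँ 1024 के गुणकों में बढ़ती हैं
UNITS = ["B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]

def to_bytes(value, unit):
    # दी गई unit के मान को bytes में बदलता है
    return value * (1024 ** UNITS.index(unit))

print(to_bytes(1, "KB"))  # 1024
print(to_bytes(1, "GB"))  # 1073741824  (1024 × 1024 × 1024)
```

इसी सूची में हर अगली unit पिछली से 1024 गुना बड़ी है, इसीलिए सिर्फ index से घात निकालना काफी है।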


इनफार्मेशन क्या हैं? (Information kya hai)

किसी को कोई जानकारी बताना, सुनाना या किसी माध्यम से उस तक पहुँचाना ही Information कहलाता है। Information एक बहुत जरूरी इकाई है: किसी भी चीज की information के जरिए हम उसके बारे में जान पाते हैं, और बेहतर समझ के लिए हम और भी information इकट्ठा करते हैं। Information एक प्रकार का processed data है, जिसे समझने और उपयोग करने के अनुरूप बनाया जाता है। Information के जरिए हम जान सकते हैं कि किसी काम को कैसे करना है।


कई विद्वानों ने Information को अलग-अलग प्रकार से परिभाषित किया है –

  • एन. बैल्किन के अनुसार – Information उसे कहा जाता है, जिसमें आकार को परिवर्तित करने की क्षमता होती है।
  • हाफमैन के अनुसार – Information वक्तव्यों, तथ्यों अथवा आकृतियों का संकलन होती है।
  • जे. बीकर के अनुसार – किसी विषय से सम्बंधित तथ्यों को ही Information कहते हैं।

Information की जरूरत हर काम को बेहतर बनाने के लिए होती है। जब तक हमें information नहीं होगी, हम किसी काम को ठीक से नहीं कर सकते। जैसे – अगर हमने students से कहा कि project बनाना है, तो जब तक हम उन्हें यह information नहीं देंगे कि क्या और कैसे बनाना है, वे बिना information के project कैसे बनाएँगे?

डाटाबेस क्या है? (Database)

Database एक ऐसा स्थान है जहाँ data को स्टोर करके रखा जाता है, ताकि डाटा सुरक्षित रहे और कोई भी बाहरी व्यक्ति उसे access न कर पाए, तथा जब भी जरूरत हो हम database से अपना data ले सकें। डाटाबेस में डाटा table के फॉर्म में रखा जाता है। आजकल बहुत बड़े डाटा पर काम होता है; जैसे किसी बड़ी कंपनी में हजारों employees होते हैं, उन सभी का डाटा manage करने के लिए उसे database में स्टोर कर दिया जाता है और जरूरत पड़ने पर easily access कर लिया जाता है।

ठीक इसी तरह ई-कॉमर्स वेबसाइट जैसे Flipkart, Amazon आदि की हम बात करें तो वहां पर भी इसका उपयोग होता है। कस्टमर की जानकारी, product detail से लेकर हर एक जानकारी डेटाबेस में ही stored रहते हैं।

डाटा को कैसे स्टोर करते हैं?

Data को  सुरक्षित रखने के लिए हमें उसे स्टोर करना होता है. डाटा को स्टोर करने के लिए जरुरत पड़ती है स्टोरेज की. जब हम डाटा को स्टोर करके रखते हैं तो उसे आवश्यकतानुसार कभी भी उपयोग में ला सकते हैं. Physical World में डाटा को कागजों में लिखकर उसकी एक फाइल बनाकर स्टोर किया जाता है।

आज का युग Digital युग है, इसलिए अब डाटा को कागजों में स्टोर करने के बजाय कंप्यूटर के माध्यम से डाटाबेस में स्टोर किया जाता है, ताकि हम उसे कहीं से भी और कभी भी access कर सकें।

इस Digital दुनिया में हम डाटा को 2 प्रकार से स्टोर कर सकते हैं।

  • Temporary Storage
  • Permanent Storage

#1 – Temporary Storage (अस्थायी भंडारण)

Temporary Storage में डाटा को Temporary रूप से RAM में स्टोर किया जाता है. इसमें Data Temporary रूप से स्टोर होता है. जब तक कंप्यूटर को Power Supply मिलती है तो RAM में डाटा Temporary रूप से स्टोर होता है. Power Supply बंद होने पर RAM में स्टोर डाटा भी Delete हो जाता है. जब भी हम Current Time में कंप्यूटर में कोई कार्य करते हैं तो उसका डाटा RAM में स्टोर रहता है.

#2 – Permanent Storage (स्थायी भंडारण)

Permanent Storage में डाटा को हमेशा के लिए स्टोर किया जाता है. डाटा को Permanent स्टोर करने के लिए हार्ड डिस्क ड्राइव, SSD आदि के इस्तेमाल करते हैं. इसके अलावा कुछ External Device जैसे कि पैन ड्राइव, मेमोरी कार्ड आदि में भी डाटा को Permanent Store किया जाता है.

अगर आपके पास कोई महत्वपूर्ण डाटा है तो आप उसे Permanent Store कर सकते हैं ताकि जब आपको जरूरत पड़े तो आप उस डाटा को Access कर सकें.

डाटा कितने प्रकार के होते है? (Data Types)

डाटा अलग अलग प्रकार के होते हैं जैसे audio, video, pictures, gif आदि

  • Alphabetic data (अक्षरात्मक डाटा) – ये डाटा alphabets (अक्षर) में होते हैं। ये अक्षरों के समूह से बनते हैं। इसमें सिर्फ alphabets होते हैं numbers नहीं होते। जैसे – A,B,C,D आदि।
  • Numeric data (संख्यात्मक डाटा) – ये डाटा numbers में होता हैं अर्थात् ये numerical (संख्यात्मक ) होता हैं । जैसे – 1,2,3,4 आदि।
  • Video data (विडियो डाटा) – ये डाटा वीडियो फॉर्म में होता है अर्थात् ये वीडियो वाले डाटा होते हैं, जैसे video clip, movie आदि।
  • Alpha numeric data (चिन्हात्मक डाटा) – इसमें डाटा special characters के रूप में होता हैं। उसे चिन्हात्मक डाटा कहते हैं, जैसे- @,#,$ आदि।
  • Graphical data (ग्राफिकल डाटा)-   ये डाटा ग्राफिकल रूप में होता हैं. इसमें ग्राफिक्स उपयोग किए जाते हैं इसलिए इसे ग्राफिकल data कहते हैं, जैसे – image, pictures आदि।
  • Sound data (ध्वनि डाटा) – ये डाटा ध्वनि के रूप में होता है. इसे ध्वनि डाटा कहते है। जैसे – गाने, ऑडियो आदि।
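ऊपर दिए गए text-आधारित प्रकारों (alphabetic, numeric, special) की पहचान का एक सरल Python उदाहरण; `classify` नाम काल्पनिक है:

```python
# string देखकर बताना कि वह alphabetic, numeric या special data है
def classify(text):
    if text.isalpha():
        return "Alphabetic data"        # जैसे A, B, C, D
    if text.isdigit():
        return "Numeric data"           # जैसे 1, 2, 3, 4
    return "Special/alphanumeric data"  # जैसे @, #, $

print(classify("ABCD"))  # Alphabetic data
print(classify("1234"))  # Numeric data
print(classify("@#$"))   # Special/alphanumeric data
```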

डाटा प्रोसेसिंग क्या हैं ? (Data Processing)

Data processing एक ऐसी प्रक्रिया है जिसमें raw डाटा को check और तैयार किया जाता है, ताकि वह आगे process हो सके या जिसे उसकी जरूरत है वह उसे data के रूप में उपयोग कर सके। यह काम प्रायः data scientists करते हैं, जो expert होते हैं, ताकि डाटा की सही तरीके से जाँच हो सके और आगे की processing में कोई दिक्कत न आए। इसी प्रक्रिया को हम डाटा प्रोसेसिंग कहते हैं।

डाटा को process करने के लिए सबसे पहले हम data को collect करते हैं, फिर उसे filter और sort करते हैं, उसके बाद उस data को process करते हैं, और अंत में उस डाटा को स्टोर किया जाता है।
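Collect → Filter → Sort → Process → Store वाले इन steps का एक काल्पनिक Python sketch (data और नाम केवल उदाहरण के लिए हैं):

```python
# Raw data पर processing के चरण
raw_data = [42, None, 7, 19, None, 3]               # Collect: इकट्ठा किया गया raw data

filtered = [x for x in raw_data if x is not None]   # Filter: बेकार entries हटाना
sorted_data = sorted(filtered)                      # Sort: क्रम में लगाना
processed = {"count": len(sorted_data),             # Process: सारांश निकालना
             "total": sum(sorted_data)}

storage = [processed]                               # Store: future use के लिए रखना
print(storage)  # [{'count': 4, 'total': 71}]
```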

डाटा प्रोसेसिंग के स्टेज (Stage)

डाटा प्रोसेसिंग पहले manual तरीके से की जाती थी, जिसमें बहुत अधिक समय लगता था तथा errors की संभावना रहती थी। लेकिन अब यह काम computer से automated तरीकों द्वारा किया जा रहा है, जिसमें data processing बहुत फास्ट होती है तथा errors की संभावना भी कम हो जाती है। डाटा प्रोसेसिंग निम्न stages में की जाती है –

  • Data Collection
  • Data Preparation
  • Data Input
  • Processing
  • Output
  • Storage

Data Collection

डाटा कलेक्शन data processing की सबसे पहली प्रक्रिया है। इसमें हम raw data को अलग-अलग माध्यमों से collect करते हैं और यह सुनिश्चित करते हैं कि data सही और विश्वसनीय है या नहीं। जाँच हो जाने पर उसे अगली प्रक्रिया में भेज दिया जाता है।

Data Preparation

डाटा preparation को data cleaning भी कहते हैं। इस process में हम raw data को sort और filter करते हैं, जिससे उसमें जो unnecessary data होता है वह remove हो जाता है, और फिर यह data अगले step के लिए तैयार हो जाता है।

Data Input

इस प्रक्रिया में हम filter किए गए data को computer के अंदर मशीनी भाषा में enter करते हैं, यानी इस data को processing करने वाले program के अनुसार तैयार करते हैं, ताकि data processing आसानी से हो सके।

Processing

इस step में सबसे पहले input किए गए data की जाँच की जाती है और डाटा को अर्थपूर्ण जानकारी के लिए process किया जाता है। इसमें data processing के लिए machine learning और artificial intelligence एल्गोरिथम का use किया जा सकता है, जिससे हमें एक अच्छा output मिल सके।

Output

इस step में process किए गए data का परिणाम हमें प्राप्त होता है, यानी raw data से निकली अर्थपूर्ण जानकारी हमें दिखाई देती है। इस output को user अलग-अलग फॉर्मेट (जैसे graph, table, audio, video, document आदि) में देख सकता है।

Storage

यह डाटा प्रोसेसिंग की सबसे last stage है। यहाँ हम process किए गए डाटा को अपने future use के लिए store करके रखते हैं। यह डाटा safely store रहता है, ताकि जरूरत पड़ने पर हम इसे use कर सकें।

डाटा प्रोसेसिंग के क्या विधि है? (Data Processing Method)

Data processing निम्न विधियों से की जा सकती है –

  • Manual data processing
  • Mechanical data processing
  • Batch processing
  • Real time processing
  • Data mining

Manual data processing

Manual डाटा प्रोसेसिंग एक ऐसी तकनीक है जिसमें डाटा मैनुअली process होता है। इसमें किसी tool या device का उपयोग नहीं किया जाता, बल्कि calculations और logical operations हाथ से करके डाटा प्रोसेसिंग की जाती है।

Mechanical data processing

Mechanical डाटा प्रोसेसिंग में डाटा को mechanical devices जैसे typewriter, प्रिंटर आदि की मदद से process किया जाता है। यह manual processing से fast होता है, जिससे समय की बचत होती है और अधिक accurate डाटा मिलता है।

Batch processing

बैच प्रोसेसिंग (batch processing) में डाटा एक निश्चित समयावधि में संकलित (collect) किया जाता है और इस डाटा पर प्रक्रिया बाद में एक साथ होती है। यह डाटा प्रोसेसिंग की पुरानी विधि है, जिससे बहुत सारे डाटा पर एक बार में काम हो जाता है। बैच प्रोसेसिंग सिस्टम में प्रत्येक user अपना प्रोग्राम ऑफ-लाइन तैयार करता है और फिर उसे कंप्यूटर सेंटर को दे देता है।

Real time processing

Real time processing का उपयोग तब किया जाता है जब हमें result तुरंत चाहिए होता है। यह प्रोसेस बहुत जल्दी result देता है; जो काम लगातार (continuous) चल रहा हो, उसके लिए इस प्रकार के system का use किया जाता है।

Data mining

यह एक ऐसी प्रक्रिया है जिसमें डाटा को mine किया जाता है, अर्थात् बड़े data में से उपयोगी डाटा खोजकर और filter करके निकाला जाता है, जिससे आगे उसे process किया जा सके। यह डाटा प्रोसेसिंग का एक बहुत important हिस्सा है।


कंप्यूटर में डाटा प्रेजेंटेशन क्या है?

कंप्यूटर में डाटा प्रेजेंटेशन डाटा को रिप्रेजेंट करने का एक तरीका है. जिसमे डाटा को प्रस्तुत किया जाता है. डाटा को ग्राफ, इमेज या विसुअल रूप में दिखाना ही डाटा का प्रेजेंटेशन है.

डाटा कितने प्रकार के होते हैं?

डाटा 6 प्रकार के होते हैं: Alphabetic data (अक्षरात्मक डाटा) जैसे A, B, C, D आदि; Numeric data (संख्यात्मक डाटा) जैसे 1, 2, 3, 4 आदि; Video data (विडियो डाटा) जैसे video clip, movie आदि; Alphanumeric data (चिन्हात्मक डाटा) जैसे @, #, $ आदि; Graphical data (ग्राफिकल डाटा) जैसे image, pictures आदि; और Sound data (ध्वनि डाटा) जैसे गाने, ऑडियो आदि।

डाटा क्या हैं?

इनफार्मेशन के समूह को डाटा कहा जाता है जो एक रॉ फैक्ट होता है. डाटा को प्रोसेस करके इन्टरप्रेट करने पर उसका अर्थ पता चलता है.

डेटा प्रतिनिधित्व में कितने नंबर सिस्टम का उपयोग किया जाता है?

डेटा representation के लिए बाइनरी नंबर सिस्टम का उपयोग किया जाता है। बाइनरी नंबर सिस्टम का base 2 होता है। इसमें डाटा को represent करने के लिए केवल दो अंकों 0 और 1 का उपयोग किया जाता है।
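Base 2 वाले इसी बाइनरी सिस्टम में आगे-पीछे रूपांतरण Python के built-in functions से देखा जा सकता है:

```python
# Decimal ↔ binary रूपांतरण
n = 13
print(bin(n))          # 0b1101  (13 = 8 + 4 + 1)
print(int("1101", 2))  # 13  (binary string से वापस decimal)
```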


आज आपने सीखा

तो दोस्तों, आपको यह लेख Data Representation in Hindi (डाटा रिप्रजेंटेशन क्या है?) कैसा लगा, मुझे कमेंट करके जरुर बताएं. यदि यह लेख आपको पसंद आया हो तो इसे लाइक और शेयर जरुर करें. यदि इस लेख से जुड़ा कोई सवाल या सुझाव है तो आप मुझे कमेंट कर सकते हैं.

इसी प्रकार के टेक्नोलॉजी से जुड़े लेख, कंप्यूटर नोट्स और बिज़नस आइडियाज की जानकारी के लिए मेरी अन्य वेबसाइट nayabusiness.in और YouTube चैनल computervidya पर विजिट जरुर करें.


thesciencevision

Data Representation in Hindi | डाटा रिप्रजेंटेशन क्या है?



Introduction –

Data Representation दो शब्‍दों से मिलकर बना है – पहला Data, जिसे आसान शब्‍दों में डिजिटल information या जानकारी कहते हैं, तथा दूसरा Representation, जिसका अर्थ निरूपण, दर्शाना या वर्णन करना होता है ।

कम्‍प्‍यूटर में हम विभिन्‍न प्रकार के डाटा जैसे कि audio, video, text, graphics, numeric आदि को स्‍टोर करते हैं । चूँकि कम्‍प्‍यूटर एक मशीन है जो human language नहीं समझती, वह यूज़र द्वारा दिये गये अलग-अलग निर्देशों तथा डाटा को एक ही भाषा में संग्रहित करता है, जो कि 0 व 1 होती है और जिसे हम बाइनरी लैंग्‍वेज कहते हैं ।

Definition of Data Representation –

कम्‍प्‍यूटर या इलेक्‍ट्रॉनिक डिवाइस में यूज़र द्वारा दिये गये सभी प्रकार के डाटा व निर्देश 0 व 1 इन दो अंको में परिवर्तित हो जाते हैं । इस प्रक्रिया को ही Data Representation कहते हैं ।   अर्थात् यूज़र द्वारा Input किया गया Data कम्‍प्‍यूटर जिस रूप में (0,1) ग्रहण करता है उसे Data Representation कहते हैं ।

Data Representation करने की दो क्रियायें है ।

  • एनालॉग क्रियाएँ (Analog Operation)
  • डिजिटल क्रियाएँ (Digital Operation)

एनालॉग क्रियाएँ  (Analog Operation) –

वे क्रियाएँ जिनमें अंको का प्रयोग नहीं किया जाता है, एनालॉग क्रियाएँ कहलाती हैं । इनमें भौतिक मात्राओं जैसे दाब, ताप, आयतन, लम्‍बाई आदि को उनके पूर्व-परिभाषित मानों के एक वर्णक्रम पर परिवर्तनीय बिन्‍दुओं के रूप में व्‍यक्‍त किया जाता है । एनालॉग क्रियाओं का प्रयोग मुख्‍यत: इन्‍जीनियरिंग तथा विज्ञान के क्षेत्रों में किया जाता है ।

Example – स्‍पीडोमीटर, थर्मामीटर, वोल्‍टमीटर इत्‍यादि एनालॉग क्रियाओं के उदाहरण हैं ।

डिजिटल क्रियाएँ  (Digital Operation) –

आधुनिक कम्‍प्‍यूटर डिजिटल इलेक्‍ट्रॉनिक परिपथ से निर्मित होते हैं । इस परिपथ का मुख्‍य भाग ट्रांजिस्‍टर होता है । जो दो अवस्‍थाओं  क्रमश: 0,1 के रूप में  कार्य करता है ।

कम्‍प्‍यूटर में डाटा को इन दो अवस्‍थाओं 0 व 1 के रूप में व्‍यक्‍त करते हैं तथा इन दो अंको या अवस्‍थाओं के सम्‍मलित रूप को बाइनरी संख्‍या-प्रणाली कहते हैं, जिसे इंग्‍लिश में Binary Number System कहते हैं । Binary Digit को संक्षिप्‍त में bit कहा जाता है ।

कम्‍प्‍यूटर में डाटा की सबसे छोटी इकाई bit कहलाती है जो कि दो अंको के समूह 0 व 1 से मिलकर बनी होती है ।

4 बिट्स – 1 निबल

8 बिट्स – 1 बाइट

1024 बाइट्स – 1 किलोबाइट (KB)

1024 किलोबाइट्स – 1 मेगाबाइट (MB)

1024 मेगाबाइट्स – 1 गीगाबाइट (GB)

1024 गीगाबाइट्स – 1 टेराबाइट (TB)

बाइनरी या द्वि-आधारी संख्‍या प्रणाली (Binary Number System) –

Binary Number System जैसा कि नाम से ही स्‍पष्‍ट है, इसमें binary (जिसका अर्थ दो होता है) अंको 0 व 1 का प्रयोग होता है । इस प्रणाली में केवल दो अंक 0 (शून्‍य) व 1 (एक) का प्रयोग होता है, जिस कारण इसे द्वि-आधारी प्रणाली भी कहते हैं । यह एक स्विच की तरह कार्य करती है जिसमें केवल दो स्थितियाँ होती हैं – एक ऑन की और दूसरी ऑफ की; इसके अतिरिक्‍त तीसरी स्थिति संभव नहीं है । इस आधार पर ही कम्‍प्‍यूटर संख्‍या प्रणाली में 0 (शून्‍य) का अ‍र्थ ऑफ से तथा 1 (एक) का अर्थ ऑन से लगाया जाता है । आधार दो होने के कारण इसके स्‍थानीय मान दाईं से बाईं ओर क्रमश: दोगुने होते जाते हैं, अर्थात् 1, 2, 4, 8, 16, 32, 64 आदि ।

दशमलव या दाशमिक संख्‍या प्रणाली(Decimal Number System)-

दैनिक जीवन में उपयोग होने वाली संख्‍या पद्धति को दशमिक या दशमलव संख्‍या प्रणाली कहा जाता है । Decimal Number System में 0, 1, 2, 3, 4, 5, 6, 7, 8 व 9 दस संकेत मान होते हैं, जिस कारण इस संख्‍या प्रणाली का आधार 10 होता है ।

Decimal Number System में स्‍थानीय मान संख्‍या के दायीं से बायीं दिशा में आधार 10 की घातों के क्रम में बढ़ते हैं । दशमलव प्रणाली के स्‍थानीय मान क्रमश: इकाई (10⁰ = 1), दहाई (10¹ = 10), सैकड़ा (10² = 100), हजार (10³ = 1000) आदि होते हैं ।

इससे स्‍पष्‍ट है कि दशमलव संख्‍या प्रणाली में स्‍थानीय मान दायीं ओर से बायीं ओर 10 की घात के रूप में बढ़ते जाते हैं ।

इसी प्रकार दशमलव बिन्‍दु के दाईं ओर स्‍थानीय मान 10 की घातों के रूप में ही घटते जाते हैं, जैसे – 1/10, 1/100, 1/1000, 1/10000 आदि । किसी भी संख्‍या के वास्‍तविक मान का पता करने के लिये उसके प्रत्‍येक अंक के मुख्‍य मान को उसके स्‍थानीय मान से गुणा करते हैं और उन्‍हें जोड़ लेते हैं ।
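मुख्‍य मान × स्‍थानीय मान जोड़ने वाली इसी गणना का एक छोटा Python उदाहरण:

```python
# संख्या 1234 का वास्तविक मान: प्रत्येक अंक को उसकी 10 की घात से गुणा करके जोड़ना
digits = [1, 2, 3, 4]  # बाएँ से दाएँ अंक

value = sum(d * 10 ** (len(digits) - 1 - i) for i, d in enumerate(digits))
# 1×1000 + 2×100 + 3×10 + 4×1 = 1234
print(value)  # 1234
```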

ऑक्‍टल या अष्‍ट –आधारी संख्‍या प्रणाली(Octal Number System)-

Octal Number System प्रणाली में 0, 1, 2, 3, 4, 5, 6, 7 इन आठ अंको का उपयोग किया जाता है । आठ अंको का प्रयोग होने के कारण ही इसका आधार आठ होता है । इन अंको के मुख्‍य मान दशमलव संख्‍या प्रणाली की तरह ही होते है । ऑक्‍टल संख्‍या प्रणाली में किसी भी बाइनरी संख्‍या को छोटे रूप में लिख सकते है । इसलिये ऑक्‍टल संख्‍या प्रणाली का उपयोग सुविधाजनक होता है ।

ऑक्‍टल संख्‍या प्रणाली का उपयोग मुख्‍यत: माइक्रो कम्‍प्‍यूटर में किया जाता है । आधार आठ होने के कारण ऑक्‍टल संख्‍या प्रणाली में अंको के स्‍थानीय मान दायीं ओर से बायीं ओर क्रमश: आठ गुने होते जाते हैं, अर्थात् 1, 8, 64, 512 आदि ।

ऑक्‍टल संख्‍या का उदाहरण – (144)₈

Note – कोई संख्‍या बाइनरी में है अथवा डेसिमल में या ऑक्‍टल में लिखी गयी है इसे प्रदर्शित करने के लिये संख्‍या को कोष्‍ठक में लिखकर उसके दाई ओर नीचे उस संख्‍या का आधार लिख दिया जाता है । जिसे हम पहचान लेते हैं कि वह संख्‍या किस System के अंतर्गत लिखी गयी है ।

बाइनरी संख्‍या प्रणाली (101)₂

दशमलव संख्‍या प्रणाली (100)₁₀

ऑक्‍टल संख्‍या प्रणाली (144)₈ आदि ।
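इन संख्‍याओं के दशमलव मान Python में base बताकर निकाले जा सकते हैं:

```python
# आधार (base) बताकर किसी संख्या का दशमलव मान निकालना
print(int("101", 2))  # (101) आधार 2 में = 5
print(int("144", 8))  # (144) आधार 8 में = 1×64 + 4×8 + 4×1 = 100
```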

हेक्‍सा-डेसीमल या षट्दशमिक संख्‍या प्रणाली (Hexa-decimal Number System) –

हेक्‍सा-डेसीमल या षट्दशमिक संख्‍या प्रणाली, जैसा कि नाम से ही स्‍पष्‍ट है, दो शब्‍दों से मिलकर बनी है: हेक्‍सा + डेसीमल । हेक्‍सा का तात्‍पर्य छ: तथा डेसीमल का तात्‍पर्य दस से होता है । अत: इस संख्‍या प्रणाली में कुल 16 अंक होते हैं, जो निम्‍न प्रकार से हैं – 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F । हेक्‍सा-डेसीमल संख्‍या प्रणाली में अंको के स्‍थानीय मान दायीं ओर से बायीं ओर क्रमश: 16 गुने होते जाते हैं ।

हेक्‍सा-डेसीमल का उदाहरण – (F6A4)₁₆
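इस हेक्‍सा-डेसीमल उदाहरण का दशमलव मान भी इसी तरह निकाला जा सकता है:

```python
# (F6A4) आधार 16 में:
# F×16³ + 6×16² + A×16¹ + 4×16⁰ = 61440 + 1536 + 160 + 4 = 63140
print(int("F6A4", 16))  # 63140
```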


Data Representation in hindi-डाटा रिप्रजेंटेशन क्या है?

हेल्लो दोस्तों! आज के इस पोस्ट में आपको data representation in hindi के बारे में बताया गया है कि यह क्या होता है और कैसे काम करता है, तो चलिए शुरू करते हैं ।

data representation का परिचय

Information विभिन्न रूपों जैसे text, numbers, images, audio और video में आती है ।

Data communication में text को एक bit pattern के रूप में represent किया जाता है, जोकि bits (0s अथवा 1s) की एक sequence होती है । bit patterns के विभिन्न sets को text symbols को represent करने के लिए design किया गया है ।

bit pattern के प्रत्येक set को code कहा जाता है और text symbols को represent करने की process को coding कहलाती है । वर्तमान में प्रचलित coding system को Unicode कहा जाता है, जिसमें विश्व की किसी भी language के किसी भी symbol अथवा character को represent करने के लिए 32 bits तक का प्रयोग किया जाता है ।

ASCII (American Standard Code for Information Interchange) को कुछ दशकों पूर्व United States में विकसित किया गया था ।
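Text symbols और उनके codes (ASCII/Unicode) के बीच का यही संबंध Python में देखा जा सकता है:

```python
# character ↔ code रूपांतरण
print(ord("A"))  # 65 (ASCII code)
print(chr(65))   # A

# Unicode text का byte (bit pattern) रूप – हिंदी अक्षर भी encode हो जाते हैं
print("क".encode("utf-8"))
```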

Numbers को भी bit pattern के द्वारा ही represent किया जाता है, परन्तु numbers को represent करने के लिए ASCII जैसे किसी code का प्रयोग नहीं किया जाता । mathematical operations को simple बनाने के लिए numbers को सीधे binary number में परिवर्तित किया जाता है ।

विभिन्न numbering systems हैं –

binary number system, decimal number system, hexadecimal number system, octal number system ।

चूँकि computer केवल binary numbers को ही समझता है, अत: data communication के लिए अन्य number systems की संख्याओं को binary number system में परिवर्तित किया जाता है ।

Images को भी bit pattern के द्वारा ही represent किया जाता है । सरलतम रूप में एक image pixels की matrix से बनी होती है, जहाँ pixel एक छोटा बिंदु अथवा dot होता है । इस dot का आकार resolution पर निर्भर करता है; image के बेहतर representation के लिए image का resolution बेहतर होना चाहिए ।

परन्तु बेहतर resolution को स्टोर करने के लिए अधिक मेमोरी की आवश्यकता होती है । image को pixels में विभाजित करने के उपरांत प्रत्येक pixel को एक bit pattern assign किया जाता है । pattern का आकार और मान image के प्रकार पर निर्भर करता है ।

केवल black and white dots से बनी image को represent करने के लिए 1-bit pattern पर्याप्त होता है । grayscale image को represent करने के लिए 2-bit pattern का प्रयोग किया जा सकता है, जिसमें black pixel को 00, dark gray pixel को 01, light gray pixel को 10 और white pixel को 11 के द्वारा represent किया जा सकता है ।
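ऊपर बताए गए 2-bit grayscale mapping का एक छोटा sketch (dictionary का नाम काल्पनिक है):

```python
# 2-bit pattern → grayscale रंग
GRAY = {"00": "black", "01": "dark gray", "10": "light gray", "11": "white"}

pixels = ["00", "11", "01"]  # किसी छोटी image के pixel patterns
print([GRAY[p] for p in pixels])  # ['black', 'white', 'dark gray']
```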

RGB image में प्रत्येक रंग तीन प्राथमिक रंगों – red, green और blue के विभिन्न संयोजनों से बनता है । प्रत्येक रंग की intensity को मापकर उसे एक bit pattern assign किया जाता है । इसी प्रकार YCM image में अन्य तीन प्राथमिक रंगों – पीला (yellow), स्यान (cyan) और मजेंटा (magenta) का प्रयोग किया जाता है ।

CMYK image में चार रंगों – स्यान (cyan), मजेंटा (magenta), पीला (yellow) और काला (black) का प्रयोग किया जाता है, तथा यहाँ भी प्रत्येक रंग की intensity को मापकर उसे एक bit pattern assign किया जाता है ।

Audio, sound अथवा music की recording अथवा broadcasting को दर्शाती है । audio स्वभाव से text, number और image से भिन्न होती है: यह continuous होती है । जब हम किसी audio को electronically record अथवा broadcast करते हैं तो उसे digital signals में परिवर्तित किया जाता है । digital signals की दो ही states होती हैं – 0 और 1, जिन्हें दो पृथक voltage levels से अभिव्यक्त किया जाता है ।

Video, pictures अथवा movies की recording अथवा broadcasting को दर्शाता है । video को एक continuous entity के रूप में video camera के द्वारा तैयार किया जा सकता है अथवा यह विभिन्न images का एक combination हो सकता है, जो इस प्रकार व्यवस्थित होती हैं कि गति का आभास होता है । जब हम किसी video को electronically record अथवा broadcast करते हैं तो उसे digital signals में परिवर्तित किया जाता है ।



निवेदन – अगर आपको यह आर्टिकल (data representation in hindi) अच्छा लगा हो तो इस पोस्ट को अपने दोस्तों के साथ जरुर शेयर करें, और आपको जिस टॉपिक पर नोट्स चाहिए वह हमें जरुर कमेंट करें । आपके कमेंट्स हमारे लिए बहुमूल्य हैं । धन्यवाद ।


Hindi Me Jankari

Data क्या है और इसके प्रकार?


डाटा क्या है (What is Data in Hindi)? आप किसी भी background से क्यूँ न हों, हम सभी ने कभी न कभी data शब्द का इस्तमाल जरुर किया होगा। लेकिन उसके बावजूद भी हमारे मन में कई बार यह सवाल जरुर उठता है कि आखिर यह Data क्या है, और सभी जगहों में इस data को इतना ज्यादा महत्वपूर्ण क्यूँ माना जाता है।

यदि आप भी data के सच्चे अर्थ के बारे में जानना चाहते हैं, तो Data क्या है और इसके types क्या हैं जैसी Data से जुड़ी सभी जानकारी मैंने इस article के जरिये आपको समझाने की कोशिश की है।

वैसे data सिर्फ एक computer से सम्बंधित term नहीं है, बल्कि plain facts को ही data कहा जाता है। यह शब्द ‘data’, ‘datum’ का plural है। यह data कुछ भी हो सकता है जैसे किसी देश की आबादी, अस्पतालों में मरीजों की संख्या, किसी school का ठिकाना इत्यादि। ये सभी चीज़ें अपने natural form में organized या structured नहीं होतीं, इसलिए इनका ज्यादा इस्तमाल नहीं किया जा सकता।

वहीँ अगर इसी data को processes, organized, structured कर present किया जाये किसी एक particular context में उन्हें useful बनाने के लिए तब इसे Information कहा जाता है. ये तो बस एक simple definition थी data और information की, पूरी details में जानने के लिए आपको यह article Data क्या है? पूरा पढना होगा।

डाटा क्या है (What is Data in Hindi)

Data को हम ऐसे कह सकते हैं की ये एक representation होता है facts, concepts, या instructions का एक formalized manner में, जो की suitable होता है communication, interpretation, या processing के लिए इन्सान या electronic machine के द्वारा।

Data Kya Hai Hindi

Data को हम characters के मदद से represent कर सकते हैं जैसे की alphabets ( A-Z, a-z ), digits (0-9) या कोई special characters ( +,-,/,*,<,>,= ) इत्यादि।

ये data कुछ भी हो सकता है कोई character, text, numbers, pictures, sound, या फिर video भी. वहीँ अगर data को कोई context में डाला न गया तब इसका कोई काम नहीं होता है चाहे वो किसी इन्सान के लिए या फिर कोई computer के लिए।

Data अपने raw form में किसी काम का नहीं होता है. लेकिन उसी data को जब हम process और interpret करते हैं तब जाकर उनका सही मतलब सामने आता है, और जो की हमारे लिए बहुत उपयोगी होते हैं. इन्ही processed data को Information भी कहा जाता है।

Analog vs. Digital Data

Data को represent करने के दो general ways होते हैं: analog और digital। Analog data प्रायः continuous होता है – यह उन actual facts के ‘analogous’ होता है जिन्हें यह represent करता है। Digital data discrete होता है और उसे limited number of elements में तोड़ा (broken up) जाता है। उदाहरण के लिए, nature (प्रकृति) analog होती है, वहीं computers digital होते हैं।


हमारे natural world में चीजें प्रायः continuous होती हैं। उदाहरण के लिए, आप इन्द्रधनुष के colors को देख सकते हैं: इन्द्रधनुष continuous होता है और infinite number of shades प्रदान करता है। वहीं computer systems discrete और finite होते हैं: उनमें सभी data binary digits में store होते हैं, इसलिए कितना data represent किया जा सकता है इसकी एक limit होती है।

Data के प्रकार

Computer systems काम करते हैं अलग अलग प्रकार के digital data के साथ।

Computing के पहले के दिनों में data primarily केवल text और numbers ही हुआ करता था; लेकिन modern day computing में बहुत सारे प्रकार के multimedia data हैं, जैसे audio, images, graphics और video। लेकिन ultimately, सभी data types binary digits के रूप में ही store किए जाते हैं।

प्रत्येक data type के लिए कुछ specific techniques होती हैं, जिनसे उन्हें computers की binary language में convert किया जाता है और जिनसे हम उन data को अपने senses जैसे sight और sound से interpret करते हैं।

डाटाबेस क्या है

हम data के बारे में ज्यादा बोल नहीं सकते बिना database का नाम लिए. हाँ एक database एक organized collection of data होता है. Data को ऐसे ही किसी list में random order में न डालकर एक database के मदद से उन्हें एक structure प्रदान किया जाता है, उन data को organize करने के लिए।

एक बहुत ही common data structures होता है database table. इस table में मुख्य रूप से rows और columns होते हैं. प्रत्येक row को typically एक record कहा जाता है, वहीँ प्रत्येक column को typically एक field कहा जाता है।
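Rows (records) और columns (fields) वाली इसी table-संरचना का एक सरल Python sketch (table का नाम और data काल्पनिक हैं):

```python
# एक छोटी table: हर dictionary एक record है, हर key एक field
employees = [
    {"id": 1, "name": "Asha", "city": "Raipur"},
    {"id": 2, "name": "Ravi", "city": "Bhopal"},
]

# 'name' field की सभी values निकालना (यानी एक column पढ़ना)
print([row["name"] for row in employees])  # ['Asha', 'Ravi']
```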

Information क्या है?

Information एक ऐसा data होता है जिसे पूरी तरह से process किया गया होता है, कुछ इस प्रकार से कि वह उस person के लिए बहुत ही meaningful होता है जो इसे receive करता है। यह कोई भी चीज़ हो सकती है जिसे communicate किया जा सके।

जहाँ Data raw facts को कहा जाता है वहीँ information processed data को कहा जाता है. उदहरण के लिए किसी class के students के subject marks, roll number, age, rank इत्यादि को data कहा जा सकता है।

वहीँ अगर आपको कहा जाये की उन students में से best 5 students के maths के marks को लाया जये तब आपको पहले उन students के सभी data को categorize करना होगा और फिर उसे process कर ही आप मांगे गए data को प्रदान कर सकते हैं. यहीं तो data आप results के तोर पर पाते हैं उसे information कहते हैं ।

Information बहुत ही organized और classified data होता है, जिसकी कुछ meaningful values होती है receiver के लिए. Information एक प्रकार का processed data होता है जिसके ऊपर decisions और actions based होता है।

Decision को meaningful बनाने के लिए, processed data must qualify करने चाहिए कुछ characteristics, जो की हैं

यदि किसी processed data में ये सभी characteristics होते हैं तब उन्हें ही असल में Information कहा जाता है.

How is Data Stored?

Data and information are typically stored in a computer on a hard drive or some other storage device.

Data stored in computer memory/storage is mainly categorized into two kinds:

1. Permanent storage (hard disk / hard drive)
2. Temporary storage (RAM: Random Access Memory)

The main difference between the two is that permanent storage retains data even through a power failure, keeping it until you intentionally delete it, whereas temporary memory loses its data the moment power fails, and is managed automatically by the computer.

Temporary memory is mostly used by computer applications to run their processes. Once a process completes, the memory is reused to run new processes. It is mainly used to store temporary files.

When bits are grouped together, the computer industry gives the group a name. Most references measure a computer's memory (primary storage) capacity and its storage (secondary) capacity in terms of the number of bytes.

Computer memory is partitioned (divided) into a large number of data containers called memory cells.

Each cell can store only a specific amount of data, called a word (for example, 8 bits of data).

Each cell has an associated location identifier called an address.

Data to be processed is coded in binary (base-2) form, for which several different encoding schemes are used; let us discuss them below.

To begin with, the digits 0 and 1 are binary digits, and each is called a bit for short. A 0 represents the OFF state and a 1 represents the ON state.

If a cell holds n bits, there are 2^n ("2 to the power of n") ways in which zeros and ones can be arranged in it. For example, with 2 binary digits (each either 0 or 1), there are 2^2 = 2×2 = 4 possible arrangements: 00, 01, 10 and 11.
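The 2^n arrangements can be enumerated directly; a minimal Python sketch for n = 2:

```python
from itertools import product

# With n bits there are 2**n distinct patterns of 0s and 1s.
n = 2
patterns = ["".join(bits) for bits in product("01", repeat=n)]
print(patterns)       # ['00', '01', '10', '11']
print(len(patterns))  # 4, i.e. 2**2
```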

To determine a computer's memory capacity, two aspects are considered: the number of bits per cell, and the number of cells into which the memory is partitioned. In other words, memory capacity depends on how many bits each cell stores and how many cells are available.

As per the computer industry, a sequence of 8 bits (also called a byte) is the basic unit of memory.

Units for Measuring Memory (Data Storage) Capacity:
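The original units table is not reproduced here, so as a sketch, the conventional binary units step up by a factor of 1024 (2^10) each:

```python
# Conventional binary units: each unit is 1024 (2**10) times
# the previous one, starting from the byte.
units = ["B", "KB", "MB", "GB", "TB"]
for power, unit in enumerate(units):
    print(f"1 {unit} = 2**{10 * power} bytes = {1024 ** power}")
```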

Types of Data in Programming

In programming, a data type is a classification that specifies what type of value a variable holds and which mathematical, relational or logical operations can be applied to it without causing an error.

For example, a string is a data type used to classify text, while an integer is a data type used to classify whole numbers.

There are several other kinds of data besides these, described below.

Numerical Data

This kind of data consists of the digits 0-9, i.e. decimal numbers. Computers make especially heavy use of numerical data; in an Excel sheet, for example, most of the data we work with is numerical.

Alphabetic Data

Letters of any alphabet, whether Hindi (क, ख, ग) or English (A, B, C), fall under alphabetic data.

Alphanumeric Data

As the name suggests, this data combines letters and digits with symbols such as @, # and $.

Audio Data

This data comprises songs, recordings and other sound, used in audio formats such as MP3 and WAV.

Video Data

This data comprises videos of all kinds, used in video formats such as MP4 and MKV.

Graphical Data

This data covers images, pictures and other graphics, used in formats such as JPG and PNG.

What is Data Processing?

Let us find out what data processing is. Data processing is a process in which raw data is converted into meaningful information. The data is manipulated so that it produces results that can resolve a problem or improve an existing situation.

Just like a production process, it follows a cycle in which inputs (raw data) are fed into a process (computer systems, software, etc.) to produce output (information and insights).

Basic Stages of Data Processing

The data processing cycle has three basic steps.

  • Input: In this step, the input data is prepared in a form convenient for processing. The form depends on the processing machine. For example, when electronic computers are used, the input data is stored on a medium such as magnetic disks or tapes.
  • Processing: In this step, the input data is transformed into a more useful form. For example, in a company, sales orders are examined to calculate a sales summary.
  • Output: In this step, the results of the preceding processing step are collected. The particular form of the output data depends on how that data will be used. For example, the output might include employees' pay-checks.
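The three stages above can be sketched as a toy pipeline; the sales orders and field names below are hypothetical:

```python
# A toy data-processing cycle: input -> processing -> output.

def process(raw_orders):
    """Processing step: turn raw sales orders into a summary."""
    total = sum(order["amount"] for order in raw_orders)
    return {"orders": len(raw_orders), "total_sales": total}

# Input step: raw data collected from some source.
raw_orders = [{"amount": 120}, {"amount": 80}, {"amount": 200}]

# Output step: the processed, more useful form.
summary = process(raw_orders)
print(summary)  # {'orders': 3, 'total_sales': 400}
```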

Let us now understand the basic stages of data processing in detail.

1. Input

In the input stage, data is collected and stored somewhere: you can store it in a computer or even write it down on paper. Let us look at the steps within the input stage.

a) Collection: Before input we need to collect the data, and data is collected from different sources. For example, to learn how many schools a city has, every school must be visited and the facts gathered; to learn how many students in a class scored more than 50%, every student's mark sheet must be collected.

b) Verification: The next step is verification, where it is confirmed whether the data taken for input is correct or not. A result, for example, is verified before it is published, just as you would verify a report before giving it to someone.

c) Coding: In this step the data is coded, meaning it is converted into machine form, i.e. a computer-readable form, so that the computer can easily process the input data further.

d) Storing: The data entered into the computer, say in Excel or Word, is then stored using a storage device. Only after the data is stored in the computer is it sent on to the next step, processing.

2. Processing

This is the step where the creation of information begins. The following techniques are used here: classification, sorting, calculation and summarizing.

a) Classification: In this step the data is classified into groups and subgroups, which makes it easier to understand properly. For example, when classifying college student data, the science-stream data, commerce-stream data and arts-stream data are kept separate, which makes data analysis easier.

b) Sorting: Here the data is arranged in an orderly sequence, which makes it easier to access. The sort order can be anything, ascending or descending; it depends on how the user wants the data sorted. For example, roll numbers in a class are kept in alphabetical order, and marks are ordered from highest to lowest.

c) Calculation: In the calculation step, an arithmetic operation is performed on the given data, such as a sum, average or percentage. For example, the average marks of the students in a class, or the ratio of males to females, fall under the calculation step. This yields correctly summarised information.
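The calculation step is plain arithmetic over the classified data; a minimal sketch with hypothetical marks:

```python
# Calculation step: arithmetic operations on classified data.
marks = [72, 85, 64, 90, 59]  # hypothetical student marks

average = sum(marks) / len(marks)
above_50_percent = sum(1 for m in marks if m > 50) / len(marks) * 100

print(average)           # 74.0
print(above_50_percent)  # 100.0 -- every mark here is above 50
```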

d) Summarising: After performing all the above operations on the input data, a summarised report is produced. In a company, management is never given every detail; only the summary is sent.

This is because management does not have time for everything, and it saves time too. For example, after running many tests, a doctor gives a single report saying the patient has a particular illness; a report card, likewise, is a summary of exam results. By now you should understand how data is sent for processing and what happens to it there.

3. Output

When all the processing steps are finished, the output result is obtained, which we can call information. The whole aim of the processing step is to produce an accurate result and deliver it to the user. Most of the time the output information is stored on a storage device such as a hard disk, pen drive, CD or DVD.

Activities performed on the output result:

a) Retrieval: The output result can be retrieved from the storage media whenever needed in the future. For example, a student can look up the marks of any of 7 semesters' exam results at any time. This process is called retrieval.

b) Conversion: The output result can be converted into different forms. After data is processed, the resulting output can be viewed in any of several forms: graph, flowchart, chart, table, diagram or report. India's population graph, a population growth chart and a college timetable are all examples of output results.

c) Communication: Whatever output emerges after the data is processed is information, and sharing it is essential, the way a newspaper carries information to everyone. Take a college timetable that a peon pins to the notice board so that all the students see it: that is communication. The process of sharing the output result is called communication. (Nowadays, with cameras everywhere, sharing happens simply by posting photos of timetables, results and notices to a WhatsApp group.)

What are the Methods of Data Processing?

Even the best data in the world is of no use unless it is processed properly. Data processing is the process of converting raw data into usable information by applying certain methods.

Yes, paper and pencil could be used for the job, but we are in the 21st century and there is no shortage of data; the quantity of data is enormous, so we turn to newer technology such as computers.

To process data with a computer, the data is first collected and checked for accuracy, and only then entered into the computer. Let us look at some of these data processing methods.

Batch Processing

Batch processing is the grunt work of the field: it is the simplest form of data processing. It is most useful when an organization has a large volume of data that can be clumped into one or two categories.

For example, in a store, batch processing can be used to categorize all the transactions in one place. As long as no information has to be changed mid-run, batch processing is very fast.
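A minimal sketch of the idea: accumulate a whole batch of transactions, then process it in one pass, grouped by category. The transaction data here is hypothetical:

```python
from collections import defaultdict

# Batch processing sketch: collect a day's transactions first,
# then total the whole batch at once, grouped by category.
transactions = [
    ("groceries", 250), ("fuel", 900),
    ("groceries", 130), ("fuel", 400),
]

totals = defaultdict(int)
for category, amount in transactions:  # one pass over the batch
    totals[category] += amount

print(dict(totals))  # {'groceries': 380, 'fuel': 1300}
```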

Real-Time Processing

Some jobs cannot wait for a batch to finish. Real-time processing methods handle data that needs an instant turn-around time. For example, if a passenger buys an airline ticket and then cancels it, the airline must update its records instantly.

With this method, records are updated immediately. Where batch processing handles a large amount of data at a specified time, real-time processing is a continuous process.

Data Mining

In data mining, data is taken from multiple sources and pools and combined in search of correlations. For example, a grocery chain analysing customer purchases might find that customers who buy cereal often buy bananas afterwards.

The chain can then use this information to increase sales; encouraging such joint purchases can prove very good for its sales.

Statistical Processing

Statistical processing involves heavy number-crunching. A company may know that it is busier on one particular day of the week; this happens because many customers submit their requests at the last moment, so such bottlenecks occur often.

Knowing the cause lets the company deal with such problems. Statistics also make it easy to compare data, whether across companies of different sizes or across different cities.


Want to recover data from a memory card?

If data has been deleted from your memory card, whether a photo or a song, first install a recovery program on your computer, then take the memory card out of your phone and connect it to the computer. Run the program and you can recover your deleted data.

What is the command to edit saved data in MS-DOS?

If you have written something in an MS-DOS file and want to edit it, first open that document; you will then be able to edit it, and you can save it again afterwards.

How do you get the data off a phone with a broken display, or connect it to a laptop?

If your phone's display has stopped working and you want to use its data, connect the phone to your computer. Many data recovery programs are available on the internet that you can use to recover the data.

What is the work of a District Data Assistant?

A District Data Assistant (DDA) handles the technical and official work at the district level that involves a computer: Excel tasks, making graphs, and even categorizing official data.

How do you save data to an SD card?

Since a phone's internal memory is quite small, users often have to save data to the SD card, and may also have to transfer existing data there. The Google Play Store has many apps you can use for such data transfers.

Through which part of the computer is data input?

To input something into a computer you use input devices such as a keyboard, mouse, OCR or OMR. To bring existing data into the computer you can also insert a pen drive or a CD.

How do you insert data in MS Word?

To insert data in MS Word you can use the Insert menu. You can also watch free videos on YouTube that teach how to use MS Word.

What is a device like the keyboard, which inputs data, called?

A user can input data into the computer through the keyboard; that is because the keyboard is an input device.

What Did You Learn Today?

I hope I have given you complete information about what data is, and I trust you now understand data well.

If you have any doubts about this article, or feel something in it should be improved, please write a comment below. Your thoughts give us a chance to learn something and to improve.

If you liked this post on what data is, or learned something from it, please share it on social networks such as Facebook and Twitter.


Data Representation

Course Computer Architecture

Digital computers store and process information in binary form, as digital logic has only two values, "1" and "0", in other words "true or false", or "ON or OFF". This system is called radix 2. We humans generally deal with radix 10, i.e. decimal. As a matter of convenience there are many other representations, such as octal (radix 8), hexadecimal (radix 16), binary coded decimal (BCD), and decimal.

Every computer's CPU has a width measured in bits, such as an 8-bit CPU, 16-bit CPU or 32-bit CPU. Similarly, each memory location can store a fixed number of bits, called the memory width. Given the sizes of the CPU and memory, it is for the programmer to handle the data representation. Most readers will know that 4 bits form a nibble and 8 bits form a byte. The word length is defined by the Instruction Set Architecture of the CPU and may be equal to the width of the CPU.

The memory simply stores information as a binary pattern of 1s and 0s; what the content of a memory location means is a matter of interpretation. If the CPU is in the fetch cycle, it interprets the fetched memory content as an instruction and decodes it based on the instruction format. In the execute cycle, the information from memory is treated as data. As ordinary computer users, we think of computers as handling English or other alphabets, special characters or numbers. A programmer considers memory content to be the data types of the programming language he uses. Now recall figures 1.2 and 1.3 of chapter 1 to reinforce the idea that conversion happens between the computer's user interface and its internal representation and storage.

  • Data Representation in Computers

Information handled by a computer is classified as instruction and data. A broad overview of the internal representation of the information is illustrated in figure 3.1. Whether the data is numeric or non-numeric, integer or otherwise, everything is internally represented in binary. It is up to the programmer to handle the interpretation of the binary pattern, and this interpretation is called Data Representation. These data representation schemes are all standardized by international organizations.

Choice of Data representation to be used in a computer is decided by

  • The number types to be represented (integer, real, signed, unsigned, etc.)
  • Range of values likely to be represented (maximum and minimum to be represented)
  • The Precision of the numbers i.e. maximum accuracy of representation (floating point single precision, double precision etc)
  • If the data is non-numeric, i.e. character data, the character representation standard to be chosen; ASCII, EBCDIC and UTF are examples of character representation standards.
  • The hardware support, in terms of word width and instructions.

Before we go into the details, let us take an example of interpretation. Say a byte in memory has the value "0011 0001". Although many interpretations are possible, as in figure 3.2, the program has only one interpretation, as decided by the programmer and declared in the program.

  • Fixed point Number Representation

Fixed point numbers are also known as whole numbers or integers. The number of bits used in representing an integer also implies the maximum number that can be represented in the system hardware. However, for efficiency of storage and operations, one may choose to represent the integer with one byte, two bytes, four bytes or more. This space allocation is translated from the definition the programmer uses when declaring a variable as a short or long integer, together with the Instruction Set Architecture.

In addition to the bit length definition for integers, we also have a choice to represent them as below:

  • Unsigned Integer : A positive number including zero can be represented in this format. All the allotted bits are utilised in defining the number. So if one is using 8 bits to represent the unsigned integer, the range of values that can be represented is 2^8 = 256 values, i.e. "0" to "255". If 16 bits are used, the range is 2^16 = 65536 values, i.e. "0" to "65535".
  • Signed Integer : In this format negative numbers, zero, and positive numbers can be represented. A sign bit indicates the direction of the magnitude, positive or negative. There are three possible representations for signed integers: Sign Magnitude format, 1's Complement format and 2's Complement format.
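These ranges follow directly from the bit counts; a small sketch:

```python
# Representable ranges for n-bit integers.
def unsigned_range(n):
    # All n bits hold the magnitude: 0 .. 2**n - 1.
    return (0, 2 ** n - 1)

def twos_complement_range(n):
    # One bit pattern set is negative: -(2**(n-1)) .. 2**(n-1) - 1.
    return (-(2 ** (n - 1)), 2 ** (n - 1) - 1)

print(unsigned_range(8))           # (0, 255)
print(unsigned_range(16))          # (0, 65535)
print(twos_complement_range(8))    # (-128, 127)
```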

Signed Integer – Sign Magnitude format: The Most Significant Bit (MSB) is reserved for indicating the direction of the magnitude (value). A "0" in the MSB means a positive number and a "1" in the MSB means a negative number. If n bits are used for representation, n-1 bits indicate the absolute value of the number. Examples for n=8:

0010 1111 = + 47 Decimal (Positive number)

1010 1111 = - 47 Decimal (Negative Number)

0111 1110 = +126 (Positive number)

1111 1110 = -126 (Negative Number)

0000 0000 = + 0 (Positive Number)

1000 0000 = - 0 (Negative Number)
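The encoding above can be sketched in a couple of lines; this is a minimal illustration, not a library routine:

```python
# Sign-magnitude encoding for n bits: the MSB is the sign,
# the remaining n-1 bits hold the absolute value.
def sign_magnitude(value, n=8):
    sign = "1" if value < 0 else "0"
    return sign + format(abs(value), f"0{n - 1}b")

print(sign_magnitude(47))   # '00101111'
print(sign_magnitude(-47))  # '10101111'
print(sign_magnitude(0))    # '00000000' (+0; '10000000' would be -0)
```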

Although this method is easy to understand, Sign Magnitude representation has several shortcomings like

  • Zero can be represented in two ways causing redundancy and confusion.
  • The total range for magnitude representation is limited to 2^(n-1) values, even though n bits are allotted.
  • The separate sign bit makes the addition and subtraction more complicated. Also, comparing two numbers is not straightforward.

Signed Integer – 1’s Complement format: In this format too, the MSB is reserved as the sign bit. The difference lies in representing the magnitude part of the value: for negative numbers the magnitude bits are inverted, hence the name 1’s Complement form. Positive numbers are represented as in plain binary. Let us see some examples to better our understanding.

1101 0000 = - 47 Decimal (Negative Number)

1000 0001 = -126 (Negative Number)

1111 1111 = - 0 (Negative Number)

  • Converting a given binary number to its 2's complement form

Step 1: compute -x = x' + 1, where x' is the one's complement of x.

Step 2: to extend the data width of the number, fill the added bits by sign extension, i.e. the MSB is copied into the new bit positions.

Example: -47 decimal in 8-bit representation: +47 = 0010 1111; its one's complement is 1101 0000; adding 1 gives 1101 0001, which is -47 in 2's complement form.

As you can see, zero is no longer represented with redundancy: there is only one way of representing zero. The other problem, the complexity of arithmetic operations, is also eliminated in 2's complement representation, as subtraction is done as addition.
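The two steps above reduce to one modular expression; a minimal sketch:

```python
# Two's complement of x over n bits: invert the bits (one's
# complement) and add 1 -- equivalently, (2**n + x) mod 2**n
# for negative x.
def twos_complement(x, n=8):
    return format((1 << n) + x if x < 0 else x, f"0{n}b")

print(twos_complement(47))   # '00101111'
print(twos_complement(-47))  # '11010001'
print(twos_complement(-1))   # '11111111'
```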

More exercises on number conversion are left to the self-interest of readers.

  • Floating Point Number system

The largest number that can be represented as a whole number is 2^n. In the scientific world, however, we come across numbers like the mass of an electron, 9.10939 × 10^-31 kg, or the velocity of light, 2.99792458 × 10^8 m/s. Imagine writing such a number on a piece of paper without an exponent and converting it into binary for computer representation; you would surely tire of it! It makes no sense to write a number in a non-readable or non-processible form. Hence we write such large or small numbers using an exponent and a mantissa. This is called Floating Point representation, or real number representation. The real number system has infinite values between 0 and 1.

Representation in computer

Unlike the two's complement representation for integer numbers, floating point numbers use sign and magnitude representation for both mantissa and exponent. In the number 9.10939 × 10^31, in decimal form, +31 is the Exponent and 9.10939 is the Fraction. Mantissa, significand and fraction are synonymously used terms. In the computer the representation is binary and the binary point is not fixed. For example, the number 23.345 can be written as 2.3345 × 10^1, 0.23345 × 10^2, or 2334.5 × 10^-2. The representation 2.3345 × 10^1 is said to be in normalised form.

Floating-point numbers usually use multiple words in memory as we need to allot a sign bit, few bits for exponent and many bits for mantissa. There are standards for such allocation which we will see sooner.

  • IEEE 754 Floating Point Representation

We have two standards from IEEE, known as Single Precision and Double Precision. These standards enable portability among different computers. Figure 3.3 pictures single precision while figure 3.4 pictures double precision. Single precision uses a 32-bit format while double precision uses a 64-bit word length. As the name suggests, double precision can represent fractions with greater accuracy. In both cases the MSB is the sign bit for the mantissa part, followed by the exponent and the mantissa. The exponent is stored in biased form rather than with a separate sign bit.

It is to be noted that in single precision we can represent exponents roughly in the range -126 to +127. As a result of arithmetic operations, the resulting exponent may not fit; this situation is called overflow in the case of a positive exponent and underflow in the case of a negative exponent. The double precision format has 11 bits for the exponent, allowing exponents roughly from -1022 to +1023. The programmer has to choose between single precision and double precision declarations using his knowledge of the data being handled.
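The single-precision layout (1 sign bit, 8 exponent bits, 23 mantissa bits) can be inspected directly; a minimal sketch using Python's standard `struct` module:

```python
import struct

# Dump the IEEE 754 single-precision (32-bit) pattern of a float.
def float_bits(x):
    (raw,) = struct.unpack(">I", struct.pack(">f", x))
    return format(raw, "032b")

bits = float_bits(-0.5)  # -0.5 = -1.0 x 2**-1
sign, exponent, mantissa = bits[0], bits[1:9], bits[9:]
print(sign)      # '1' (negative)
print(exponent)  # '01111110' -- biased: 126 = -1 + 127
print(mantissa)  # 23 fraction bits, all zero for 0.5
```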

Floating point operations on a regular CPU can be very slow. Generally, a special-purpose processor known as a co-processor is used; it works in tandem with the main CPU. The programmer should use the float declaration only if the data is genuinely in real-number form. Float declarations are not to be used generously.

  • Decimal Numbers Representation

Decimal numbers (radix 10) are represented and processed in the system with the support of additional hardware. We deal with numbers in decimal format in everyday life. Some machines implement decimal arithmetic too, much like floating point arithmetic hardware. In such a case, the CPU uses decimal numbers in BCD (binary coded decimal) form and performs BCD arithmetic operations. BCD operates on radix 10, and this hardware works without conversion to pure binary. A nibble represents each decimal digit in packed BCD form. BCD operations require not only special hardware but also a decimal instruction set.
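The nibble-per-digit idea can be sketched quickly; this illustrative helper is not a real BCD hardware routine:

```python
# Packed BCD sketch: each decimal digit is stored in one
# 4-bit nibble, with no conversion to pure binary.
def to_packed_bcd(number):
    return " ".join(format(int(d), "04b") for d in str(number))

print(to_packed_bcd(59))   # '0101 1001'
print(to_packed_bcd(302))  # '0011 0000 0010'
```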

  • Exceptions and Error Detection

All of us know that when we do arithmetic operations, we can get answers with more digits than the operands (e.g. 8 x 2 = 16). This happens in computer arithmetic operations too. When the result size exceeds the allotted size of the variable or the register, it becomes an error or exception. The exception conditions associated with numbers and number operations are Overflow, Underflow, Truncation, Rounding and Multiple Precision. These are detected by the associated hardware in the arithmetic unit. These exceptions apply to both fixed point and floating point operations. Each of these exceptional conditions has a flag bit assigned in the Processor Status Word (PSW). We may discuss them in more detail in later chapters.

  • Character Representation

Another data type is non-numeric, largely character sets. We use a human-understandable character set to communicate with the computer, for both input and output. Standard character sets such as EBCDIC and ASCII are chosen to represent alphabets, numbers and special characters. Nowadays the Unicode standard is also in use for non-English languages such as Chinese, Hindi and Spanish. These codes are accessible and available on the internet; interested readers may access them and learn more.
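Character codes are easy to probe from Python, whose strings are Unicode; a minimal sketch:

```python
# ASCII covers basic Latin; Unicode/UTF-8 covers everything else.
print(ord("A"))             # 65: the ASCII/Unicode code point of 'A'
print(chr(0x31))            # '1': the byte 0011 0001 read as a character
print("अ".encode("utf-8"))  # a Hindi letter becomes multiple UTF-8 bytes
```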


What is the Difference Between Data and Information?

Difference between data and information: If you are a computer science student, you have surely heard the terms data and information. Anyone connected with the field of computing is well acquainted with these two words, yet computer users often call information data, and data information.

Data and information are two words whose definitions we must understand in order to tell them apart, so in this post let us look at the difference between data and information in detail.


What is the main difference between Data and Information?

Computers have now developed to the point where a user inputs data through the keyboard and receives a result back as output. The question, then, is what exactly data and information are; the table below shows how data differs from information.

What is the main difference between data (Data) and information (Information)?

If you have read the differences between data and information given in the table above, you will have understood quite easily how they differ. Let us now explore data and information in a little more detail.

Just as iron ore is extracted from a mine and refined into iron, the data and information used in a computer complement each other.

In simple terms, data is the ore; once refined, it yields the iron, that is, the information. Ore is essential for making iron, yet ore also has a distinct form of its own.


Just as ore is the raw material for iron, data is the raw material for information. Data is always raw and unorganized, whereas information is produced through processing and is therefore organized.

Just as iron ore contains unwanted elements such as other metals, alloys and soil, raw data also contains much that is useless; processing filters this out and turns the useful part into information.
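The refining analogy can be made concrete with a tiny sketch: raw readings on their own say little, but a simple processing step turns them into usable information. The temperature values below are made-up examples:

```python
# Raw data: unprocessed readings, meaningless in isolation.
raw_readings = [38.2, 37.9, 39.1, 38.5, 40.0, 37.8, 38.7]  # degrees C

# Processing: organize and summarize the raw facts.
average = sum(raw_readings) / len(raw_readings)
peak = max(raw_readings)

# Information: an interpreted result someone can act on.
print(f"average {average:.1f} C, peak {peak:.1f} C")  # average 38.6 C, peak 40.0 C
```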

Difference between Data and Information: Conclusion

Data and information are used not only in computing but also in business, the stock market, analysis of factors that influence markets, population counts and similar tasks. Even before the invention of the computer, data and information were highly useful to people.

After the invention of the computer, they came to be used in many more fields, such as programming and artificial intelligence. We hope this post on the difference between data (Data) and information (Information) has proved helpful and informative.

If you have any suggestions or doubts about this post, let us know in the comment box, and subscribe to this blog for more information on computers, technology, mobiles and gadgets.


About the author.

Dear readers, I am Ady, technical writer and co-founder of Unhindi.com. I am a blogger as well as a graphics designer and digital marketer, and through in-depth research on the internet I share new information about computers, technology, the internet and making money online via blogging. Thank you very much for reading and joining this blog.



  • Open access
  • Published: 13 May 2024

Representation of internal speech by single neurons in human supramarginal gyrus

Sarah K. Wandelt, David A. Bjånes, Kelsie Pejsa, Brian Lee, Charles Liu & Richard A. Andersen

Nature Human Behaviour (2024)


  • Brain–machine interface
  • Neural decoding

Speech brain–machine interfaces (BMIs) translate brain signals into words or audio outputs, enabling communication for people having lost their speech abilities due to diseases or injury. While important advances in vocalized, attempted and mimed speech decoding have been achieved, results for internal speech decoding are sparse and have yet to achieve high functionality. Notably, it is still unclear from which brain areas internal speech can be decoded. Here two participants with tetraplegia with implanted microelectrode arrays located in the supramarginal gyrus (SMG) and primary somatosensory cortex (S1) performed internal and vocalized speech of six words and two pseudowords. In both participants, we found significant neural representation of internal and vocalized speech, at the single neuron and population level in the SMG. From recorded population activity in the SMG, the internally spoken and vocalized words were significantly decodable. In an offline analysis, we achieved average decoding accuracies of 55% and 24% for each participant, respectively (chance level 12.5%), and during an online internal speech BMI task, we averaged 79% and 23% accuracy, respectively. Evidence of shared neural representations between internal speech, word reading and vocalized speech processes was found in participant 1. SMG represented words as well as pseudowords, providing evidence for phonetic encoding. Furthermore, our decoder achieved high classification with multiple internal speech strategies (auditory imagination/visual imagination). Activity in S1 was modulated by vocalized but not internal speech in both participants, suggesting no articulator movements of the vocal tract occurred during internal speech production. This work represents a proof-of-concept for a high-performance internal speech BMI.


Speech is one of the most basic forms of human communication, a natural and intuitive way for humans to express their thoughts and desires. Neurological diseases like amyotrophic lateral sclerosis (ALS) and brain lesions can lead to the loss of this ability. In the most severe cases, patients who experience full-body paralysis might be left without any means of communication. Patients with ALS self-report loss of speech as their most serious concern 1 . Brain–machine interfaces (BMIs) are devices offering a promising technological path to bypass neurological impairment by recording neural activity directly from the cortex. Cognitive BMIs have demonstrated potential to restore independence to participants with tetraplegia by reading out movement intent directly from the brain 2 , 3 , 4 , 5 . Similarly, reading out internal (also reported as inner, imagined or covert) speech signals could allow the restoration of communication to people who have lost it.

Decoding speech signals directly from the brain presents its own unique challenges. While non-invasive recording methods such as functional magnetic resonance imaging (fMRI), electroencephalography (EEG) or magnetoencephalography 6 are important tools to locate speech and internal speech production, they lack the necessary temporal and spatial resolution, adequate signal-to-noise ratio or portability for building an online speech BMI 7 , 8 , 9 . For example, state-of-the-art EEG-based imagined speech decoding performances in 2022 ranged from approximately 60% to 80% binary classification 10 . Intracortical electrophysiological recordings have higher signal-to-noise ratios and excellent temporal resolution 11 and are a more suitable choice for an internal speech decoding device.

Invasive speech decoding has predominantly been attempted with electrocorticography (ECoG) 9 or stereo-electroencephalographic depth arrays 12 , as they allow sampling neural activity from different parts of the brain simultaneously. Impressive results in vocalized and attempted speech decoding and reconstruction have been achieved using these techniques 13 , 14 , 15 , 16 , 17 , 18 . However, vocalized speech has also been decoded from localized regions of the cortex. In 2009, the use of a neurotrophic electrode 19 demonstrated real-time speech synthesis from the motor cortex. More recently, speech neuroprosthetics were built from small-scale microelectrode arrays located in the motor cortex 20 , 21 , premotor cortex 22 and supramarginal gyrus (SMG) 23 , demonstrating that vocalized speech BMIs can be built using neural signals from localized regions of cortex.

While important advances in vocalized speech 16 , attempted speech 18 and mimed speech 17 , 22 , 24 , 25 , 26 decoding have been made, highly accurate internal speech decoding has not been achieved. Lack of behavioural output, lower signal-to-noise ratio and differences in cortical activations compared with vocalized speech are speculated to contribute to lower classification accuracies of internal speech 7 , 8 , 13 , 27 , 28 . In ref. 29 , patients implanted with ECoG grids over frontal, parietal and temporal regions silently read or vocalized written words from a screen. They significantly decoded vowels (37.5%) and consonants (36.3%) from internal speech (chance level 25%). Ikeda et al. 30 decoded three internally spoken vowels using ECoG arrays using frequencies in the beta band, with up to 55.6% accuracy from the Broca area (chance level 33%). Using the same recording technology, ref. 31 investigated the decoding of six words during internal speech. The authors demonstrated an average pair-wise classification accuracy of 58%, reaching 88% for the highest pair (chance level 50%). These studies were so-called open-loop experiments, in which the data were analysed offline after acquisition. A recent paper demonstrated real-time (closed-loop) speech decoding using stereotactic depth electrodes 32 . The results were encouraging as internal speech could be detected; however, the reconstructed audio was not discernable and required audible speech to train the decoding model.

While, to our knowledge, internal speech has not previously been decoded from SMG, evidence for internal speech representation in the SMG exists. A review of 100 fMRI studies 33 not only described SMG activity during speech production but also suggested its involvement in subvocal speech 34 , 35 . Similarly, an ECoG study identified high-frequency SMG modulation during vocalized and internal speech 36 . Additionally, fMRI studies have demonstrated SMG involvement in phonologic processing, for instance, during tasks while participants reported whether two words rhyme 37 . Performing such tasks requires the participant to internally ‘hear’ the word, indicating potential internal speech representation 38 . Furthermore, a study performed in people suffering from aphasia found that lesions in the SMG and its adjacent white matter affected inner speech rhyming tasks 39 . Recently, ref. 16 showed that electrode grids over SMG contributed to vocalized speech decoding. Finally, vocalized grasps and colour words were decodable from SMG from one of the same participants involved in this work 23 . These studies provide evidence for the possibility of an internal speech decoder from neural activity in the SMG.

The relationship between inner speech and vocalized speech is still debated. The general consensus posits similarities between internal and vocalized speech processes 36 , but the degree of overlap is not well understood 8 , 35 , 40 , 41 , 42 . Characterizing similarities between vocalized and internal speech could provide evidence that results found with vocalized speech could translate to internal speech. However, such a relationship may not be guaranteed. For instance, some brain areas involved in vocalized speech might be poor candidates for internal speech decoding.

In this Article, two participants with tetraplegia performed internal and vocalized speech of eight words while neurophysiological responses were captured from two implant sites. To investigate neural semantic and phonetic representation, the words were composed of six lexical words and two pseudowords (words that mimic real words without semantic meaning). We examined representations of various language processes at the single-neuron level using recording microelectrode arrays from the SMG located in the posterior parietal cortex (PPC) and the arm and/or hand regions of the primary somatosensory cortex (S1). S1 served as a control for movement, due to emerging evidence of its activation beyond defined regions of interest 43 , 44 . Words were presented with an auditory or a written cue and were produced internally as well as orally. We hypothesized that SMG and S1 activity would modulate during vocalized speech and that SMG activity would modulate during internal speech. Shared representation between internal speech, vocalized speech, auditory comprehension and word reading processes was investigated.

Task design

We characterized neural representations of four different language processes within a population of SMG and S1 neurons: auditory comprehension, word reading, internal speech and vocalized speech production. In this manuscript, internal speech refers to engaging a prompted word internally (‘inner monologue’), without correlated motor output, while vocalized speech refers to audibly vocalizing a prompted word. Participants were implanted in the SMG and S1 on the basis of grasp localization fMRI tasks (Fig. 1 ).

Figure 1

a , b , SMG implant locations in participant 1 (1 × 96 multielectrode array) ( a ) and participant 2 (1 × 64 multielectrode array) ( b ). c , d , S1 implant locations in participant 1 (2 × 96 multielectrode arrays) ( c ) and participant 2 (2 × 64 multielectrode arrays) ( d ).

The task contained six phases: an inter-trial interval (ITI), a cue phase (cue), a first delay (D1), an internal speech phase (internal), a second delay (D2) and a vocalized speech phase (speech). Words were cued with either an auditory or a written version of the word (Fig. 2a ). Six of the words were informed by ref. 31 (battlefield, cowboy, python, spoon, swimming and telephone). Two pseudowords (nifzig and bindip) were added to explore phonetic representation in the SMG. The first participant completed ten session days, composed of both the auditory and the written cue tasks. The second participant completed nine sessions, focusing only on the written cue task. The participants were instructed to internally say the cued word during the internal speech phase and to vocalize the same word during the speech phase.

Figure 2

a , Written words and sounds were used to cue six words and two pseudowords in a participant with tetraplegia. The ‘audio cue’ task was composed of an ITI, a cue phase during which the sound of one of the words was emitted from a speaker (between 842 and 1,130 ms), a first delay (D1), an internal speech phase, a second delay (D2) and a vocalized speech phase. The ‘written cue’ task was identical to the ‘audio cue’ task, except that written words appeared on the screen for 1.5 s. Eight repetitions of eight words were performed per session day and per task for the first participant. For the second participant, 16 repetitions of eight words were performed for the written cue task. b – e , Example smoothed firing rates of neurons tuned to four words in the SMG for participant 1 (auditory cue, python ( b ), and written cue, telephone ( c )) and participant 2 (written cue, nifzig ( d ), and written cue, spoon ( e )). Top: the average firing rate over 8 or 16 trials (solid line, mean; shaded area, 95% bootstrapped confidence interval). Bottom: one example trial with associated audio amplitude (grey). Vertically dashed lines indicate the beginning of each phase. Single neurons modulate firing rate during internal speech in the SMG.

For each of the four language processes, we observed selective modulation of individual neurons’ firing rates (Fig. 2b–e ). In general, the firing rates of neurons increased during the active phases (cue, internal and speech) and decreased during the rest phases (ITI, D1 and D2). A variety of activation patterns were present in the neural population. Example neurons were selected to demonstrate increases in firing rates during internal speech, cue and vocalized speech. Both the auditory (Fig. 2b ) and the written cue (Fig. 2c–e ) evoked highly modulated firing rates of individual neurons during internal speech.

These stereotypical activation patterns were evident at the single-trial level (Fig. 2b–e , bottom). When the auditory recording was overlaid with firing rates from a single trial, a heterogeneous neural response was observed (Supplementary Fig. 1a ), with some SMG neurons preceding or lagging peak auditory levels during vocalized speech. In contrast, neural activity from primary sensory cortex (S1) only modulated during vocalized speech and produced similar firing patterns regardless of the vocalized word (Supplementary Fig. 1b ).

Population activity represented selective tuning for individual words

Population analysis in the SMG mirrored single-neuron patterns of activation, showing increases in tuning during the active task phases (Fig. 3a,d ). Tuning of a neuron to a word was determined by fitting a linear regression model to the firing rate in 50-ms time bins ( Methods ). Distinctions between participant 1 and participant 2 were observed. Specifically, participant 1 exhibited strong tuning, whereas the number of tuned units was notably lower in participant 2. Based on these findings, we exclusively ran the written cue task with participant number 2. In participant 1, representation of the auditory cue was lower compared with the written cue (Fig. 3b , cue). However, this difference was not observed for other task phases. In both participants, the tuned population activity in S1 increased during vocalized speech but not during the cue and internal speech phases (Supplementary Fig. 3a,b ).
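The tuning analysis described above (a linear regression of per-bin firing rate on word identity) can be sketched as follows. This is not the authors' code: the synthetic firing rates, the one-hot design and the effect size are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_reps = 8, 16                      # 8 words, 16 repetitions each
labels = np.repeat(np.arange(n_words), n_reps)

# Synthetic firing rates for one neuron in one 50-ms bin:
# baseline noise plus an elevated response to word index 3.
rates = rng.normal(10.0, 1.0, size=labels.size)
rates[labels == 3] += 5.0

# One-hot design matrix: one regressor per word.
X = np.zeros((labels.size, n_words))
X[np.arange(labels.size), labels] = 1.0

# Least-squares fit; with a one-hot design the coefficients
# are simply the per-word mean firing rates.
beta, *_ = np.linalg.lstsq(X, rates, rcond=None)
residual = rates - X @ beta

# Fraction of firing-rate variance explained by word identity.
r2 = 1.0 - residual.var() / rates.var()
preferred = int(np.argmax(beta))
print(f"R^2 = {r2:.2f}, preferred word index = {preferred}")
```

A neuron would be called "tuned" in a given bin when word identity explains significantly more variance than chance; the threshold and significance test are omitted here.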

Figure 3

a , The average percentage of tuned neurons to words in 50-ms time bins in the SMG over the trial duration for ‘auditory cue’ (blue) and ‘written cue’ (green) tasks for participant 1 (solid line, mean over ten sessions; shaded area, 95% confidence interval of the mean). During the cue phase of auditory trials, neural data were aligned to audio onset, which occurred within 200–650 ms following initiation of the cue phase. b , The average percentage of tuned neurons computed on firing rates per task phase, with 95% confidence interval over ten sessions. Tuning during action phases (cue, internal and speech) following rest phases (ITI, D1 and D2) was significantly higher (paired two-tailed t -test, d.f. 9, P ITI_CueWritten  < 0.001, Cohen’s d  = 2.31; P ITI_CueAuditory  = 0.003, Cohen’s d  = 1.25; P D1_InternalWritten  = 0.008, Cohen’s d  = 1.08; P D1_InternalAuditory  < 0.001, Cohen’s d  = 1.71; P D2_SpeechWritten  < 0.001, Cohen’s d  = 2.34; P D2_SpeechAuditory  < 0.001, Cohen’s d  = 3.23). c , The number of neurons tuned to each individual word in each phase for the ‘auditory cue’ and ‘written cue’ tasks. d , The average percentage of tuned neurons to words in 50-ms time bins in the SMG over the trial duration for ‘written cue’ (green) tasks for participant 2 (solid line, mean over nine sessions; shaded area, 95% confidence interval of the mean). Due to a reduced number of tuned units, only the ‘written cue’ task variation was performed. e , The average percentage of tuned neurons computed on firing rates per task phase, with 95% confidence interval over nine sessions. Tuning during cue and internal phases following rest phases ITI and D1 was significantly higher (paired two-tailed t -test, d.f. 8, P ITI_CueWritten  = 0.003, Cohen’s d  = 1.38; P D1_Internal  = 0.001, Cohen’s d  = 1.67). f , The number of neurons tuned to each individual word in each phase for the ‘written cue’ task.


To quantitatively compare activity between phases, we assessed the differential response patterns for individual words by examining the variations in average firing rate across different task phases (Fig. 3b,e ). In both participants, tuning during the cue and internal speech phases was significantly higher compared with their preceding rest phases ITI and D1 (paired t -test between phases. Participant 1: d.f. 9, P ITI_CueWritten  < 0.001, Cohen’s d  = 2.31; P ITI_CueAuditory  = 0.003, Cohen’s d  = 1.25; P D1_InternalWritten  = 0.008, Cohen’s d  = 1.08; P D1_InternalAuditory  < 0.001, Cohen’s d  = 1.71. Participant 2: d.f. 8, P ITI_CueWritten  = 0.003, Cohen’s d  = 1.38; P D1_Internal  = 0.001, Cohen’s d  = 1.67). For participant 1, we also observed significantly higher tuning to vocalized speech than to tuning in D2 (d.f. 9, P D2_SpeechWritten  < 0.001, Cohen’s d  = 2.34; P D2_SpeechAuditory  < 0.001, Cohen’s d  = 3.23). Representation for all words was observed in each phase, including pseudowords (bindip and nifzig) (Fig. 3c,f ). To identify neurons with selective activity for unique words, we performed a Kruskal–Wallis test (Supplementary Fig. 3c,d ). The results mirrored findings of the regression analysis in both participants, albeit weaker in participant 2. These findings suggest that, while neural activity during active phases differed from activity during the ITI phase, neural responses of only a few neurons varied across different words for participant 2.

The neural population in the SMG simultaneously represented several distinct aspects of language processing: temporal changes, input modality (auditory, written for participant 1) and unique words from our vocabulary list. We used demixed principal component analysis (dPCA) to decompose and analyse contributions of each individual component: timing, cue modality and word. In Fig. 4 , demixed principal components (PCs) explaining the highest amount of variance were plotted by projecting data onto their respective dPCA decoder axis.
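The marginalizations that dPCA demixes can be illustrated conceptually. The snippet below shows only the decomposition step (splitting trial-averaged activity into timing, word and interaction parts), not the full dPCA fitting procedure, and the array sizes and data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_words, n_bins = 5, 8, 20
X = rng.normal(size=(n_neurons, n_words, n_bins))   # trial-averaged activity

mean_total = X.mean(axis=(1, 2), keepdims=True)           # overall mean
timing = X.mean(axis=1, keepdims=True) - mean_total       # varies with time only
word = X.mean(axis=2, keepdims=True) - mean_total         # varies with word only
interaction = X - mean_total - timing - word              # everything else

# The parts sum back to the original data exactly.
recon = mean_total + timing + word + interaction
print(np.allclose(recon, X))  # True
```

dPCA then finds, within each such marginalization, the low-dimensional axes that capture most of its variance.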

figure 4

a – e , dPCA was performed to investigate variance within three marginalizations: ‘timing’, ‘cue modality’ and ‘word’ for participant 1 ( a – c ) and ‘timing’ and ‘word’ for participant 2 ( d and e ). Demixed PCs explaining the highest variance within each marginalization were plotted over time, by projecting the data onto their respective dPCA decoder axis. In a , the ‘timing’ marginalization demonstrates SMG modulation during cue, internal speech and vocalized speech, while S1 only represents vocalized speech. The solid blue lines (8) represent the auditory cue trials, and dashed green lines (8) represent written cue trials. In b , the ‘cue modality’ marginalization suggests that internal and vocalized speech representation in the SMG are not affected by the cue modality. The solid blue lines (8) represent the auditory cue trials, and dashed green lines (8) represent written cue trials. In c , the ‘word’ marginalization shows high variability for different words in the SMG, but near zero for S1. The colours (8) represent individual words. For each colour, solid lines represent auditory trials and dashed lines represent written cue trials. d is the same as a , but for participant 2. The dashed green lines (8) represent written cue trials. e is the same as c , but for participant 2. The colours (8) represent individual words during written cue trials. The variance for different words in the SMG (left) was higher than in S1 (right), but lower in comparison with SMG in participant 1 ( c ).

For participant 1, the ‘timing’ component revealed that temporal dynamics in the SMG peaked during all active phases (Fig. 4a). In contrast, temporal S1 modulation peaked only during vocalized speech production, indicating a lack of synchronized lip and face movement of the participant during the other task phases. While ‘cue modality’ components were separable during the cue phase (Fig. 4b), they overlapped during subsequent phases. Thus, internal and vocalized speech representation may not be influenced by the cue modality. Pseudowords had similar separability to lexical words (Fig. 4c). The explained variance between words was high in the SMG and close to zero in S1. In participant 2, temporal dynamics of the task were preserved (‘timing’ component). However, variance attributable to words was reduced, suggesting a lower neuronal ability to represent individual words in participant 2. In S1, the results mirrored findings from S1 in participant 1 (Fig. 4d,e, right).

Internal speech is decodable in the SMG

Separable neural representations of both internal and vocalized speech processes implicate SMG as a rich source of neural activity for real-time speech BMI devices. The decodability of words correlated with the percentage of tuned neurons (Fig. 3a–f ) as well as the explained dPCA variance (Fig. 4c,e ) observed in the participants. In participant 1, all words in our vocabulary list were highly decodable, averaging 55% offline decoding and 79% (16–20 training trials) online decoding from neurons during internal speech (Fig. 5a,b ). Words spoken during the vocalized phase were also highly discriminable, averaging 74% offline (Fig. 5a ). In participant 2, offline internal speech decoding averaged 24% (Supplementary Fig. 4b ) and online decoding averaged 23% (Fig. 5a ), with preferential representation of words ‘spoon’ and ‘swimming’.

figure 5

a, Offline decoding accuracies: ‘audio cue’ and ‘written cue’ task data were combined for each individual session day, and leave-one-out CV was performed (black dots). PCA was performed on the training data, an LDA model was constructed, and classification accuracies were plotted with 95% confidence intervals over the session means. The significance of classification accuracies was evaluated by comparing results with a shuffled distribution (averaged shuffle results over 100 repetitions indicated by red dots; P < 0.01 indicates that the average mean is >99.5th percentile of shuffle distribution, n = 10). In participant 1, classification accuracies during action phases (cue, internal and speech) were significantly higher than during the preceding rest phases (ITI, D1 and D2) (paired two-tailed t-test: n = 10, d.f. 9, for all P < 0.001, Cohen’s d = 6.81, 2.29 and 5.75). b, Online decoding accuracies: classification accuracies for internal speech were evaluated in a closed-loop internal speech BMI application on three different session days for both participants. In participant 1, decoding accuracies were significantly above chance (averaged shuffle results over 1,000 repetitions indicated by red dots; P < 0.001 indicates that the average mean is >99.95th percentile of shuffle distribution) and improved when 16–20 trials per word were used to train the model (two-sample two-tailed t-test, n(8–14) = 8, d.f. 11, n(16–20) = 5, P = 0.029), averaging 79% classification accuracy. In participant 2, online decoding accuracies were significant (averaged shuffle results over 1,000 repetitions indicated by red dots; P < 0.05 indicates that average mean is >97.5th percentile of shuffle distribution, n = 7) and averaged 23%. c, An offline confusion matrix for participant 1: confusion matrices for each of the different task phases were computed on the tested data and averaged over all session days.
d , An online confusion matrix: a confusion matrix was computed combining all online runs, leading to a total of 304 trials (38 trials per word) for participant 1 and 448 online trials for participant 2. Participant 1 displayed comparable online decoding accuracies for all words, while participant 2 had preferential decoding for the words ‘swimming’ and ‘spoon’.

In participant 1, trial data from both types of cue (auditory and written) were concatenated for offline analysis, since SMG activity was only differentiable between the types of cue during the cue phase (Figs. 3a and 4b ). This resulted in 16 trials per condition. Features were selected via principal component analysis (PCA) on the training dataset, and PCs that explained 95% of the variance were kept. A linear discriminant analysis (LDA) model was evaluated with leave-one-out cross-validation (CV). Significance was computed by comparing results with a null distribution ( Methods ).
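The offline pipeline described above (PCA retaining 95% of variance fit on the training folds only, followed by LDA under leave-one-out CV) can be sketched with scikit-learn. The firing-rate features below are synthetic stand-ins for the real single-neuron data, and the exact implementation details of the study's pipeline are not reproduced:

```python
# Minimal sketch of the offline decoder: PCA (95% variance) + LDA, evaluated
# with leave-one-out cross-validation. The Pipeline ensures PCA is refit on
# each training fold, avoiding leakage into the held-out trial.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_words, n_trials, n_units = 8, 16, 40      # 8-word vocabulary, 16 trials each
labels = np.repeat(np.arange(n_words), n_trials)
means = rng.normal(0, 1.5, size=(n_words, n_units))  # word-specific patterns
X = means[labels] + rng.normal(0, 1.0, size=(labels.size, n_units))

clf = make_pipeline(PCA(n_components=0.95), LinearDiscriminantAnalysis())
acc = cross_val_score(clf, X, labels, cv=LeaveOneOut()).mean()
```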

Significant word decoding was observed during all phases, except during the ITI (Fig. 5a, n = 10; a mean decoding value above the 99.5th percentile of the shuffle distribution corresponds to P < 0.01; per-phase Cohen’s d = 0.64, 6.17, 3.04, 6.59, 3.93 and 8.26; confidence interval of the mean ± 1.73, 4.46, 5.21, 5.67, 4.63 and 6.49). Decoding accuracies were significantly higher in the cue, internal speech and speech conditions, compared with rest phases ITI, D1 and D2 (Fig. 5a, paired t-test, n = 10, d.f. 9, for all P < 0.001, Cohen’s d = 6.81, 2.29 and 5.75). Significant cue phase decoding suggested that modality-independent linguistic representations were present early within the task 45 . Internal speech decoding averaged 55% offline, with the highest session at 72% and a chance level of ~12.5% (Fig. 5a, red line). Vocalized speech averaged even higher, at 74%. All words were highly decodable (Fig. 5c). As suggested from our dPCA results, individual words were not significantly decodable from neural activity in S1 (Supplementary Fig. 4a), indicating generalized activity for vocalized speech in the S1 arm region (Fig. 4c).
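The shuffle-distribution significance criterion used throughout (true accuracy above the 99.5th percentile of label-permuted accuracies, i.e. P < 0.01) can be sketched as a label-permutation test. To keep the sketch fast, a nearest-class-mean rule stands in for the PCA + LDA decoder; data and decoder are illustrative, not the study's:

```python
# Permutation (shuffle) test for decoding significance: repeatedly permute
# the word labels, re-run the decoder, and compare the true accuracy with
# the resulting null distribution.
import numpy as np

rng = np.random.default_rng(1)

def nearest_mean_accuracy(X, y):
    # leave-one-out nearest-class-mean classification
    correct = 0
    for i in range(len(y)):
        mask = np.ones(len(y), bool); mask[i] = False
        classes = np.unique(y[mask])
        centroids = np.stack([X[mask & (y == c)].mean(0) for c in classes])
        pred = classes[np.argmin(np.linalg.norm(centroids - X[i], axis=1))]
        correct += pred == y[i]
    return correct / len(y)

y = np.repeat(np.arange(4), 10)                       # 4 classes, 10 trials each
X = y[:, None] + rng.normal(0, 0.5, size=(len(y), 5)) # weakly tuned features
true_acc = nearest_mean_accuracy(X, y)
null = np.array([nearest_mean_accuracy(X, rng.permutation(y))
                 for _ in range(100)])
significant = true_acc > np.percentile(null, 99.5)    # P < 0.01 criterion
```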

For participant 2, significant word decoding in the SMG was observed during the cue, internal and vocalized speech phases (Supplementary Fig. 4b, n = 9; mean decoding values above the 97.5th/99.5th percentiles of the shuffle distribution correspond to P < 0.05/P < 0.01; per-phase Cohen’s d = 0.35, 1.15, 1.09, 1.44, 0.99 and 1.49; confidence interval of the mean ± 3.09, 5.02, 6.91, 8.14, 5.45 and 4.15). Decoding accuracies were significantly higher in the cue and internal speech conditions, compared with rest phases ITI and D1 (Supplementary Fig. 4b, paired t-test, n = 9, d.f. 8, P_ITI_Cue = 0.013, Cohen’s d = 1.07, P_D1_Internal = 0.01, Cohen’s d = 1.11). S1 decoding mirrored results in participant 1, suggesting that no synchronized face movements occurred during the cue phase or internal speech phase (Supplementary Fig. 4c).

High-accuracy online speech decoder

We developed an online, closed-loop internal speech BMI using an eight-word vocabulary (Fig. 5b ). On three separate session days, training datasets were generated using the written cue task, with eight repetitions of each word for each participant. An LDA model was trained on the internal speech data of the training set, corresponding to only 1.5 s of neural data per repetition for each class. The trained decoder predicted internal speech during the online task. During the online task, the vocalized speech phase was replaced with a feedback phase. The decoded word was shown in green if correctly decoded, and in red if wrongly decoded (Supplementary Video 1 ). The classifier was retrained after each run of the online task, adding the newly recorded data. Several online runs were performed on each session day, corresponding to different datapoints in Fig. 5b. When using between 8 and 14 repetitions per word to train the decoding model, an average of 59% classification accuracy was obtained for participant 1. Accuracies were significantly higher (two-sample two-tailed t-test, n(8–14) = 8, n(16–20) = 5, d.f. 11, P = 0.029) the more data were added to train the model, obtaining an average of 79% classification accuracy with 16–20 repetitions per word. The highest single run accuracy was 91%. All words were well represented, illustrated by a confusion matrix of 304 trials (Fig. 5d ). In participant 2, decoding was statistically significant, but lower compared with participant 1. The lower number of tuned units (Fig. 3a–f ) and reduced explained variance between words (Fig. 4e , left) could account for these findings. Additionally, preferential representation of words ‘spoon’ and ‘swimming’ was observed.
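The run-by-run retraining scheme amounts to appending each run's newly recorded trials to the training pool and refitting the decoder. A hedged sketch on synthetic data (numbers of repetitions and units are illustrative):

```python
# Sketch of online retraining: after each run, the recorded (features, label)
# pairs are added to the pool and the LDA decoder is refit on the accumulated
# data, mirroring the growth from 8 to 16-20 repetitions per word.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)
n_words, n_units = 8, 30
patterns = rng.normal(0, 1.2, (n_words, n_units))  # word-specific patterns

def record_run(n_rep):
    """Simulate recording n_rep trials per word."""
    y = np.tile(np.arange(n_words), n_rep)
    X = patterns[y] + rng.normal(0, 1, (y.size, n_units))
    return X, y

X_pool, y_pool = record_run(8)          # initial training set: 8 reps/word
test_X, test_y = record_run(4)          # held-out evaluation trials
accuracies = []
for _ in range(3):                      # three successive online runs
    model = LinearDiscriminantAnalysis().fit(X_pool, y_pool)
    accuracies.append(model.score(test_X, test_y))
    new_X, new_y = record_run(4)        # data recorded during this run
    X_pool = np.vstack([X_pool, new_X])
    y_pool = np.concatenate([y_pool, new_y])
```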

Shared representations between internal speech, written words and vocalized speech

Different language processes are engaged during the task: auditory comprehension or visual word recognition during the cue phase, and internal speech and vocalized speech production during the speech phases. It has been widely assumed that each of these processes is part of a highly distributed network, involving multiple cortical areas 46 . In this work, we observed significant representation of different language processes in a common cortical region, SMG, in our participants. To explore the relationships between each of these processes, for participant 1 we used cross-phase classification to identify the distinct and common neural codes separately in the auditory and written cue datasets. By training our classifier on the representation found in one phase (for example, the cue phase) and testing the classifier on another phase (for example, internal speech), we quantified generalizability of our models across neural activity of different language processes (Fig. 6 ). The generalizability of a model to different task phases was evaluated through paired t -tests. No significant difference between classification accuracies indicates good generalization of the model, while significantly lower classification accuracies suggest poor generalization of the model.
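The cross-phase procedure described above can be sketched directly: fit a decoder on trials from one phase and score it on trials from another, with similar accuracies indicating a shared code. The phase data below are synthetic (the "internal" and "speech" phases share word patterns, the ITI carries none); names and sizes are illustrative:

```python
# Cross-phase classification sketch: train on one task phase, test on
# another. High cross-phase accuracy suggests a shared neural code between
# the two phases; near-chance accuracy suggests distinct representations.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n_words, n_trials, n_units = 8, 16, 30
labels = np.repeat(np.arange(n_words), n_trials)
patterns = rng.normal(0, 1.5, size=(n_words, n_units))

phases = {
    "internal": patterns[labels] + rng.normal(0, 1, (labels.size, n_units)),
    "speech":   patterns[labels] + rng.normal(0, 1, (labels.size, n_units)),
    "iti":      rng.normal(0, 1, (labels.size, n_units)),  # no word information
}

def cross_phase_acc(train, test):
    model = LinearDiscriminantAnalysis().fit(phases[train], labels)
    return model.score(phases[test], labels)

shared = cross_phase_acc("internal", "speech")  # high: shared word code
rest   = cross_phase_acc("internal", "iti")     # near chance (0.125)
```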

figure 6

a, Evaluating the overlap of shared information between different task phases in the ‘auditory cue’ task. For each of the ten session days, cross-phase classification was performed. It consisted of training a model on a subset of data from one phase (for example, cue) and applying it on a subset of data from ITI, cue, internal and speech phases. This analysis was performed separately for each task phase. PCA was performed on the training data, an LDA model was constructed and classification accuracies were plotted with a 95% confidence interval over session means. Significant differences in performance between phases were evaluated between the ten sessions (paired two-tailed t-test, FDR corrected, d.f. 9, P < 0.001 for all, Cohen’s d ≥ 1.89). For easier visibility, significant differences between ITI and other phases were not plotted. b, Same as a for the ‘written cue’ task (paired two-tailed t-test, FDR corrected, d.f. 9, P_Cue_Internal = 0.028, Cohen’s d > 0.86; P_Cue_Speech = 0.022, Cohen’s d = 0.95; all others P < 0.001 and Cohen’s d ≥ 1.65). c, The percentage of neurons tuned during the internal speech phase that are also tuned during the vocalized speech phase. Neurons tuned during the internal speech phase were computed as in Fig. 3b separately for each session day. From these, the percentage of neurons that were also tuned during vocalized speech was calculated. More than 80% of neurons during internal speech were also tuned during vocalized speech (82% in the ‘auditory cue’ task, 85% in the ‘written cue’ task). In total, 71% of ‘auditory cue’ and 79% of ‘written cue’ neurons also preserved tuning to at least one identical word during internal speech and vocalized speech phases. d, The percentage of neurons tuned during the internal speech phase that were also tuned during the cue phase. Right: 78% of neurons tuned during internal speech were also tuned during the written cue phase.
Left: only 47% of neurons tuned during the internal speech phase were also tuned during the auditory cue phase. In total, 71% of neurons preserved tuning between the written cue phase and the internal speech phase, while 42% preserved tuning between the auditory cue and the internal speech phase.

The strongest shared neural representations were found between visual word recognition, internal speech and vocalized speech (Fig. 6b ). A model trained on internal speech was highly generalizable to both vocalized speech and written cued words, evidence for a possible shared neural code (Fig. 6b , internal). In contrast, the model’s performance was significantly lower when tested on data recorded in the auditory cue phase (Fig. 6a , training phase internal: paired t -test, d.f. 9, P Cue_Internal  < 0.001, Cohen’s d  = 2.16; P Cue_Speech  < 0.001, Cohen’s d  = 3.34). These differences could stem from the inherent challenges in comparing visual and auditory language stimuli, which differ in processing time: instantaneous for text versus several hundred milliseconds for auditory stimuli.

We evaluated the capability of a classification model, initially trained to distinguish words during vocalized speech, in its ability to generalize to internal and cue phases (Fig. 6a,b , training phase speech). The model demonstrated similar levels of generalization during internal speech and in response to written cues, as indicated by the lack of significance in decoding accuracy between the internal and written cue phase (Fig. 6b , training phase speech, cue–internal). However, the model generalized significantly better to internal speech than to representations observed during the auditory cue phase (Fig. 6a , training phase speech, d.f. 9, P Cue_Internal  < 0.001, Cohen’s d  = 2.85).

Representation of words at the single-neuron level was highly consistent between the internal speech, vocalized speech and written cue phases. A high percentage of neurons were not only active during the same task phases but also preserved identical tuning to at least one word (Fig. 6c,d ). In total, 82–85% of neurons active during internal speech were also active during vocalized speech. In 71–79% of neurons, tuning was preserved between the internal speech and vocalized speech phases (Fig. 6c ). During the cue phase, 78% of neurons active during internal speech were also active during the written cue (Fig. 6d , right). However, a lower percentage of neurons (47%) were active during the auditory cue phase (Fig. 6d , left). Similarly, 71% of neurons preserved tuning between the written cue phase and the internal speech phase, while 42% of neurons preserved tuning between the auditory cue phase and the internal speech phase.
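Overlap percentages of this kind reduce to set operations over per-unit tuning. A toy, fully hypothetical example (unit IDs and word tunings are invented for illustration; "spoon" and "swimming" are from the study's vocabulary, the rest are placeholders):

```python
# Computing (1) the fraction of internal-speech-tuned neurons that are also
# tuned during vocalized speech, and (2) the fraction preserving tuning to at
# least one identical word across both phases. Data are illustrative.
tuned_words = {
    "internal": {1: {"spoon"}, 2: {"telephone"},
                 3: {"cowboy", "swimming"}, 5: {"battlefield"}},
    "speech":   {1: {"spoon"}, 2: {"bindip"},
                 3: {"swimming"}, 4: {"nifzig"}},
}

internal_units = set(tuned_words["internal"])
speech_units = set(tuned_words["speech"])

# fraction of internal-speech-tuned neurons also tuned during vocalized speech
also_active = len(internal_units & speech_units) / len(internal_units)

# fraction preserving tuning to at least one identical word in both phases
preserved = sum(
    1 for u in internal_units & speech_units
    if tuned_words["internal"][u] & tuned_words["speech"][u]
) / len(internal_units)
```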

Together with the cross-phase analysis, these results suggest strong shared neural representations between internal speech, vocalized speech and the written cue, both at the single-neuron and at the population level.

Robust decoding of multiple internal speech strategies within the SMG

Strong shared neural representations in participant 1 between written, inner and vocalized speech suggest that all three partly represent the same cognitive process or all cognitive processes share common neural features. While internal and vocalized speech have been shown to share common neural features 36 , similarities between internal speech and the written cue could have occurred through several different cognitive processes. For instance, the participant’s observation of the written cue could have activated silent reading. This process has been self-reported as activating internal speech, which can involve ‘hearing’ a voice, thus having an auditory component 42 , 47 . However, the participant could also have mentally pictured an image of the written word while performing internal speech, involving visual imagination in addition to language processes. Both hypotheses could explain the high amount of shared neural representation between the written cue and the internal speech phases (Fig. 6b ).

We therefore compared two possible internal sensory strategies in participant 1: a ‘sound imagination’ strategy in which the participant imagined hearing the word, and a ‘visual imagination’ strategy in which the participant visualized the word’s image (Supplementary Fig. 5a ). Each strategy was cued by the modalities we had previously tested (auditory and written words) (Table 1 ). To assess the similarity of these internal speech processes to other task phases, we conducted a cross-phase decoding analysis (as performed in Fig. 6 ). We hypothesized that, if the high cross-decoding results between internal and written cue phases primarily stemmed from the participant engaging in visual word imagination, we would observe lower decoding accuracies during the auditory imagination phase.

Both strategies demonstrated high representation of the four-word dataset (Supplementary Fig. 5b , highest 94%, chance level 25%). These results suggest our speech BMI decoder is robust to multiple types of internal speech strategy.

The participant described the ‘sound imagination’ strategy as being easier and more similar to the internal speech condition of the first experiment. The participant’s self-reported strategy suggests that no visual imagination was performed during internal speech. Correspondingly, similarities between written cue and internal speech phases may stem from internal speech activation during the silent reading of the cue.

In this work, we demonstrated a decoder for internal and vocalized speech, using single-neuron activity from the SMG. Two chronically implanted, speech-abled participants with tetraplegia were able to use an online, closed-loop internal speech BMI to achieve on average 79% and 23% classification accuracy with 16–32 training trials for an eight-word vocabulary. Furthermore, high decoding was achievable with only 24 s of training data per word, corresponding to 16 trials each with 1.5 s of data. Firing rates recorded from S1 showed generalized activation only during vocalized speech activity, but individual words were not classifiable. In the SMG, shared neural representations between internal speech, the written cue and vocalized speech suggest the occurrence of common processes. Robust control could be achieved using visual and auditory internal speech strategies. Representation of pseudowords provided evidence for a phonetic word encoding component in the SMG.

Single neurons in the SMG encode internal speech

We demonstrated internal speech decoding of six different words and two pseudowords in the SMG. Single neurons increased their firing rates during internal speech (Fig. 2 , S1 and S2), which was also reflected at the population level (Fig. 3a,b,d,e ). Each word was represented in the neuronal population (Fig. 3c,f ). Classification accuracy and tuning during the internal speech phase were significantly higher than during the previous delay phase (Figs. 3b,e and 5a , and Supplementary Figs. 3c,d and 4b ). This evidence suggests that we did not simply decode sustained activity from the cue phase but activity generated by the participant performing internal speech. We obtained significant offline and online internal speech decoding results in two participants (Fig. 5a and Supplementary Fig. 4b ). These findings provide strong evidence for internal speech processing at the single-neuron level in the SMG.

Neurons in S1 are modulated by vocalized but not internal speech

Neural activity recorded from S1 served as a control for synchronized face and lip movements during internal speech. While vocalized speech robustly activated sensory neurons, no increase over baseline activity was observed during the internal speech phase or the auditory and written cue phases in either participant (Fig. 4 , S1). These results indicate that no synchronized movement inflated our decoding accuracy of internal speech (Supplementary Fig. 4a,c ).

A previous imaging study achieved significant offline decoding of several different internal speech sentences performed by patients with mild ALS 6 . Together with our findings, these results suggest that a BMI speech decoder that does not rely on any movement may translate to communication opportunities for patients suffering from ALS and locked-in syndrome.

Different face activities are observable but not decodable in arm area of S1

The topographic representation of body parts in S1 has recently been found to be less rigid than previously thought. Generalized finger representation was found in a presumably S1 arm region of interest (ROI) 44 . Furthermore, an fMRI study found observable face and lip activity in S1 leg and hand ROIs. However, differentiation between two lip actions was restricted to the face ROI 43 . Correspondingly, we observed generalized face and lip activity in a predominantly S1 arm region for participant 1 (see ref. 48 for implant location) and a predominantly S1 hand region for participant 2 during vocalized speech (Fig. 4a,d and Supplementary Figs. 1 and 4a,b ). Recorded neural activity contained similar representations for different spoken words (Fig. 4c,e ) and was not significantly decodable (Supplementary Fig. 4a,c ).

Shared neural representations between internal and vocalized speech

The extent to which internal and vocalized speech generalize is still debated 35 , 42 , 49 and depends on the investigated brain area 36 , 50 . In this work, we found on average stronger representation for vocalized (74%) than internal speech (Fig. 5a , 55%) in participant 1 but the opposite effect in participant 2 (Supplementary Fig. 4b , 24% internal, 21% vocalized speech). Additionally, cross-phase decoding of vocalized speech from models trained on data during internal speech resulted in comparable classification accuracies to those of internal speech (Fig. 6a,b , internal). Most neurons tuned during internal speech were also tuned to at least one of the same words during vocalized speech (71–79%; Fig. 6c ). However, some neurons were only tuned during internal speech, or to different words. These observations also applied to firing rates of individual neurons. Here, we observed neurons that had higher peak rates during the internal speech phase than the vocalized speech phase (Supplementary Fig. 1 : swimming and cowboy). Together, these results further suggest neural signatures during internal and vocalized speech are similar but distinct from one another, emphasizing the need for developing speech models from data recorded directly on internal speech production 51 .

Similar observations were made when comparing internal speech processes with visual word processes. In total, 79% of neurons were active both in the internal speech phase and the written cue phase, and 79% preserved the same tuning (Fig. 6d , written cue). Additionally, high cross-decoding between both phases was observed (Fig. 6b , internal).

Shared representation between speech and written cue presentation

Observation of a written cue may engage a variety of cognitive processes, such as visual feature recognition, semantic understanding and/or related language processes, many of which modulate similar cortical regions as speech 45 . Studies have found that silent reading can evoke internal speech; it can be modulated by a presumed author’s speaking speed, voice familiarity or regional accents 35 , 42 , 47 , 52 , 53 . During silent reading of a cued sentence with a neutral versus increased prosody (madeleine brought me versus MADELEINE brought me), one study in particular found that increased left SMG activation correlated with the intensity of the produced inner speech 54 .

Our data demonstrated high cross-phase decoding accuracies between both written cue and speech phases in our first participant (Fig. 6b ). Due to substantial shared neural representation, we hypothesize that the participant’s silent reading during the presentation of the written cue may have engaged internal speech processes. However, this same shared representation could have occurred if visual processes were activated in the internal speech phase. For instance, the participant could have performed mental visualization of the written word instead of generating an internal monologue, as the subjective perception of internal speech may vary between individuals.

Investigating internal speech strategies

In a separate experiment, participant 1 was prompted to execute different mental strategies during the internal speech phase, consisting of ‘sound imagination’ or ‘visual word imagination’ (Supplementary Fig. 5a ). We found robust decoding during the internal strategy phase, regardless of which mental strategy was performed (Supplementary Fig. 5b ). This participant reported the sound strategy was easier to execute than the visual strategy. Furthermore, this participant reported that the sound strategy was more similar to the internal speech strategy employed in prior experiments. This self-report suggests that the patient did not perform visual imagination during the internal speech task. Therefore, shared neural representation between internal and written word phases during the internal speech task may stem from silent reading of the written cue. Since multiple internal mental strategies are decodable from SMG, future patients could have flexibility with their preferred strategy. For instance, people with a strong visual imagination may prefer performing visual word imagination.

Audio contamination in decoding result

Prior studies examining the neural representation of attempted or vocalized speech must contend with potential acoustic contamination of electrophysiological brain signals during speech production 55 . During internal speech production, no detectable audio was captured by the audio equipment or noticed by the researchers in the room. In the rare cases in which the participant spoke during internal speech (three trials), those trials were removed. Furthermore, if audio had contaminated the neural data during the auditory cue or vocalized speech, we would probably have observed significant decoding in all channels. However, no significant classification was detected in S1 channels during the auditory cue phase or the vocalized speech phase (Supplementary Fig. 2b ). We therefore conclude that acoustic contamination did not artificially inflate the observed classification accuracies during vocalized speech in the SMG.

Single-neuron modulation during internal speech with a second participant

We found single-neuron modulation to speech processes in a second participant (Figs. 2d,e and 3f , and Supplementary Fig. 2d ), as well as significant offline and online classification accuracies (Fig. 5a and Supplementary Fig. 4b ), confirming neural representation of language processes in the SMG. The number of neurons distinctly active for different words was lower compared with the first participant (Fig. 2e and Supplementary Fig. 3d ), limiting our ability to decode with high accuracy between words in the different task phases (Fig. 5a and Supplementary Fig. 4b ).

Previous work found that single neurons in the PPC exhibited a common neural substrate for written action verbs and observed actions 56 . Another study found that single neurons in the PPC also encoded spoken numbers 57 . These recordings were made in the superior parietal lobule whereas the SMG is in the inferior parietal lobule. Thus, it would appear that language-related activity is highly distributed across the PPC. However, the difference in strength of language representation between each participant in the SMG suggests that there is a degree of functional segregation within the SMG 37 .

Different anatomical geometries of the SMG between participants mean that precise comparisons of implanted array locations become difficult (Fig. 1 ). Implant locations for both participants were informed from pre-surgical anatomical/vasculature scans and fMRI tasks designed to evoke activity related to grasp and dexterous hand movements 48 . Furthermore, the number of electrodes of the implanted array was higher in the first participant (96) than in the second participant (64). A pre-surgical assessment of functional activity related to language and speech may be required to determine the best candidate implant locations within the SMG for online speech decoding applications.

Impact on BMI applications

In this work, an online internal speech BMI achieved significant decoding from single-neuron activity in the SMG in two participants with tetraplegia. The online decoders were trained on as few as eight repetitions of 1.5 s per word, demonstrating that meaningful classification accuracies can be obtained with only a few minutes’ worth of training data per day. This proof-of-concept suggests that the SMG may be able to represent a much larger internal vocabulary. By building models on internal speech directly, our results may translate to people who cannot vocalize speech or are completely locked in. Recently, ref. 26 demonstrated a BMI speller that decoded attempted speech of the letters of the NATO alphabet and used those to construct sentences. Scaling our vocabulary to that size could allow for an unrestricted internal speech speller.

To summarize, we demonstrate the SMG as a promising candidate to build an internal brain–machine speech device. Different internal speech strategies were decodable from the SMG, allowing patients to use the methods and languages with which they are most comfortable. We found evidence for a phonetic component during internal and vocalized speech. Adding to previous findings indicating grasp decoding in the SMG 23 , we propose the SMG as a multipurpose BMI area.

Experimental model and participant details

Two male participants with tetraplegia (33 and 39 years) were recruited for an institutional review board- and Food and Drug Administration-approved clinical trial of a BMI and gave informed consent to participate (Institutional Review Board of Rancho Los Amigos National Rehabilitation Center, Institutional Review Board of California Institute of Technology, clinical trial registration NCT01964261 ). This clinical trial evaluated BMIs in the PPC and the somatosensory cortex for grasp rehabilitation. One of the primary effectiveness objectives of the study is to evaluate the effectiveness of the neuroport in controlling virtual or physical end effectors. Signals from the PPC will allow the subjects to control the end effector with accuracy greater than chance. Participants were compensated for their participation in the study and reimbursed for any travel expenses related to participation in study activities. The authors affirm that the human research participant provided written informed consent for publication of Supplementary Video 1 . The first participant suffered a spinal cord injury at cervical level C5 1.5 years before participating in the study. The second participant suffered a C5–C6 spinal cord injury 3 years before implantation.

Method details

Data were collected from implants located in the left SMG and the left S1 (for anatomical locations, see Fig. 1 ). For description of pre-surgical planning, localization fMRI tasks, surgical techniques and methodologies, see ref. 48 . Placement of electrodes was based on fMRI tasks involving grasp and dexterous hand movements.

The first participant underwent surgery in November 2016 to implant two 96-channel platinum-tipped multi-electrode arrays (NeuroPort Array, Blackrock Microsystems) in the SMG and in the ventral premotor cortex and two 7 × 7 sputtered iridium oxide film (SIROF)-tipped microelectrode arrays with 48 channels each in the hand and arm area of S1. Data were collected between July 2021 and August 2022. The second participant underwent surgery in October 2022 and was implanted with SIROF-tipped 64-channel microelectrode arrays in S1 (two arrays), SMG, ventral premotor cortex and primary motor cortex. Data were collected in January 2023.

Data collection

Recording began 2 weeks after surgery and continued one to three times per week. Data for this work were collected between 2021 and 2023. Broadband electrical activity was recorded from the NeuroPort Arrays using Neural Signal Processors (Blackrock Microsystems). Analogue signals were amplified, bandpass filtered (0.3–7,500 Hz) and digitized at 30,000 samples per second. To identify putative action potentials, these broadband data were bandpass filtered (250–5,000 Hz) and thresholded at −4.5 times the estimated root-mean-square voltage of the noise. For some of the analyses, waveforms captured at these threshold crossings were then spike sorted by manually assigning each observation to a putative single neuron; for others, multiunit activity was considered. For participant 1, an average of 33 sorted SMG units (between 22 and 56) and 83 sorted S1 units (between 59 and 96) were recorded per session. For participant 2, an average of 80 sorted SMG units (between 69 and 92) and 81 sorted S1 units (between 61 and 101) were recorded per session. Auditory data were recorded at 30,000 Hz simultaneously with the neural data. Background noise was reduced post-recording using the noise reduction function of the program ‘Audible’.
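As a rough sketch, the filtering and thresholding step could look like the following. Only the 250–5,000 Hz band and the −4.5 × RMS threshold come from the text; the function name, the fourth-order Butterworth design and the median-based noise RMS estimate are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_threshold_crossings(broadband, fs=30000, factor=-4.5):
    """Bandpass 250-5,000 Hz, then find crossings of factor * estimated noise RMS."""
    b, a = butter(4, [250, 5000], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, broadband)
    # Robust RMS estimate of the noise floor (median-based, less biased by spikes)
    noise_rms = np.median(np.abs(filtered)) / 0.6745
    threshold = factor * noise_rms
    below = filtered < threshold               # threshold is negative
    # A crossing is the first sample of each run below threshold
    crossings = np.flatnonzero(below[1:] & ~below[:-1]) + 1
    return crossings, threshold
```

On real recordings this would be applied per channel; here a single voltage trace is assumed.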

Experimental tasks

We implemented different tasks to study language processes in the SMG. The tasks cued six words informed by ref. 31 (spoon, python, battlefield, cowboy, swimming and telephone) as well as two pseudowords (bindip and nifzig). The participants were seated 1 m in front of a light-emitting diode screen (1,190 mm screen diagonal) on which the task was displayed. The task was implemented using the Psychophysics Toolbox 58, 59, 60 extension for MATLAB. Only the written cue task was used for participant 2.

Auditory cue task

Each trial consisted of six phases, referred to in this paper as ITI (inter-trial interval), cue, D1 (first delay), internal, D2 (second delay) and speech. The trial began with a brief ITI (2 s), followed by a 1.5-s cue phase. During the cue phase, a speaker emitted the sound of one of the eight words (for example, python). Word duration varied between 842 and 1,130 ms. Then, after a delay period (grey circle on screen; 0.5 s), the participant was instructed to internally say the cued word (orange circle on screen; 1.5 s). After a second delay (grey circle on screen; 0.5 s), the participant vocalized the word (green circle on screen; 1.5 s).

Written cue task

The task was identical to the auditory cue task, except that words were cued in writing instead of sound. The written word appeared on the screen for 1.5 s during the cue phase. The auditory cue was played 200–650 ms later than the written cue appeared on the screen, owing to the different sound outputs used (direct computer audio versus a Bluetooth speaker).

One auditory cue task and one written cue task were recorded on ten individual session days in participant 1. The written cue task was recorded on seven individual session days in participant 2.

Control experiments

Three experiments were run to investigate internal strategies and phonetic versus semantic processing.

Internal strategy task

The task was designed to vary the internal strategy employed by the participant during the internal speech phase. Two internal strategies were tested: a sound imagination and a visual imagination. For the ‘sound imagination’ strategy, the participant was instructed to imagine what the word sounded like. For the ‘visual imagination’ strategy, the participant was instructed to perform a mental visualization of the written word. We also tested whether the cue modality (auditory or written) influenced the internal strategy. A subset of four words was used for this experiment, leading to four different variations of the task.

The internal strategy task was run on one session day with participant 1.

Online task

The ‘written cue task’ was used for the closed-loop experiments. To obtain training data, a written cue task was first run, and a classification model was trained only on the internal speech data of that task (see ‘Classification’ section). The closed-loop task was nearly identical to the written cue task but replaced the vocalized speech phase with a feedback phase: the word was shown on the screen in green if correctly classified and in red if wrongly classified. See Supplementary Video 1 for an example of the participant performing the online task. The online task was run on three individual session days.

Error trials

Trials in which participants accidentally spoke during the internal speech phase (3 trials) or said the wrong word during the vocalized speech phase (20 trials) were removed from all analyses.

Total number of recording trials

For participant 1, we collected offline datasets composed of eight trials per word across ten sessions. Trials during which participant errors occurred were excluded. In total, between 156 and 159 trials per word were included, with a total of 1,257 trials for offline analysis. On four non-consecutive session days, the auditory cue task was run first, and on six non-consecutive days, the written cue task was run first. For online analysis, datasets were recorded on three different session days, for a total of 304 trials. Participant 2 underwent a similar data collection process, with offline datasets comprising 16 trials per word using the written cue modality over nine sessions. Error trials were excluded. In total, between 142 and 144 trials per word were kept, with a total of 1,145 trials for offline analysis. For online analysis, datasets were recorded on three session days, leading to a total of 448 online trials.

Quantification and statistical analysis

Analyses were performed using MATLAB R2020b and Python, version 3.8.11.

Neural firing rates

Firing rates of sorted units were computed as the number of spikes occurring in 50-ms bins, divided by the bin width and smoothed with a Gaussian filter (kernel width 50 ms) to form an estimate of the instantaneous firing rate (spikes s−1).
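This firing-rate estimate (bin, divide by bin width, Gaussian-smooth) can be sketched as below; the function name and the use of scipy.ndimage.gaussian_filter1d are illustrative choices, not the authors' code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def instantaneous_firing_rate(spike_times, t_start, t_stop, bin_width=0.05):
    """Count spikes in 50-ms bins, divide by the bin width (-> spikes per second),
    then smooth with a Gaussian kernel whose width equals one 50-ms bin."""
    edges = np.arange(t_start, t_stop + bin_width / 2, bin_width)
    counts, _ = np.histogram(spike_times, bins=edges)
    rates = counts / bin_width
    return gaussian_filter1d(rates.astype(float), sigma=1.0)  # sigma = 1 bin = 50 ms
```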

Linear regression tuning analysis

To identify units exhibiting selective firing rate patterns (or tuning) for each of the eight words, linear regression analysis was performed in two different ways: (1) step by step in 50-ms time bins to allow assessing changes in neuronal tuning over the entire trial duration; (2) averaging the firing rate in each task phase to compare tuning between phases. The model returns a fit that estimates the firing rate of a unit on the basis of the following variables:

\(\mathrm{FR} = \beta_0 + \sum_{w=1}^{W} \beta_w X_w\)

where FR corresponds to the firing rate of the unit, β0 to the offset term equal to the average ITI firing rate of the unit, Xw is the indicator variable for word w, and βw corresponds to the estimated regression coefficient for word w. W was equal to 8 (battlefield, cowboy, python, spoon, swimming, telephone, bindip and nifzig) 23.

In this model, β symbolizes the change of firing rate from baseline for each word. A t-statistic was calculated by dividing each β coefficient by its standard error. Tuning was based on the P value of the t-statistic for each β coefficient. A follow-up analysis was performed to adjust for the false discovery rate (FDR) between the P values 61, 62. A unit was defined as tuned if the adjusted P value was <0.05 for at least one word. This definition allowed a unit to be tuned to zero, one or multiple words during different timepoints of the trial. Linear regression was performed for each session day individually. A 95% confidence interval of the mean was computed by applying the Student’s t-inverse cumulative distribution function over the ten sessions.
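The tuning analysis can be sketched as follows, assuming the baseline (ITI) observations enter the regression with all word indicators set to zero so that β0 estimates the average ITI rate; the function name and the hand-rolled Benjamini–Hochberg step are illustrative:

```python
import numpy as np
from scipy import stats

def word_tuning(fr_iti, fr_words, word_labels, alpha=0.05):
    """Fit FR = beta0 + sum_w beta_w * X_w by ordinary least squares.

    fr_iti:      (n_iti,) baseline firing rates (all word indicators zero)
    fr_words:    (n_trials,) firing rates during the phase or bin of interest
    word_labels: (n_trials,) integer word index, 0..W-1
    Returns (beta, p_adj, tuned): coefficients, BH-adjusted p-values of the
    per-word t-statistics, and whether any adjusted p-value is < alpha."""
    words = np.unique(word_labels)
    y = np.concatenate([fr_iti, fr_words])
    X = np.zeros((y.size, words.size + 1))
    X[:, 0] = 1.0                                  # intercept (baseline rate)
    for j, w in enumerate(words):
        X[fr_iti.size:, j + 1] = (word_labels == w)
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = y.size - X.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    t = beta[1:] / se[1:]                          # one t-statistic per word
    p = 2 * stats.t.sf(np.abs(t), dof)
    order = np.argsort(p)                          # Benjamini-Hochberg step-up
    ranked = p[order] * p.size / (np.arange(p.size) + 1)
    p_adj = np.empty_like(p)
    p_adj[order] = np.minimum(np.minimum.accumulate(ranked[::-1])[::-1], 1.0)
    return beta, p_adj, bool((p_adj < alpha).any())
```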

Kruskal–Wallis tuning analysis

As an alternative tuning definition, differences in firing rates between words were tested using the Kruskal–Wallis test, the non-parametric analogue of the one-way analysis of variance (ANOVA). For each neuron, the analysis evaluated the null hypothesis that the data from each word come from the same distribution. A follow-up analysis was performed to adjust for FDR between the P values for each task phase 61, 62. A unit was defined as tuned during a phase if the adjusted P value was smaller than α = 0.05.
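A sketch of this alternative definition for a matrix of units, again with an illustrative Benjamini–Hochberg adjustment (the exact grouping of P values for the adjustment is an assumption; the paper adjusts within each task phase):

```python
import numpy as np
from scipy import stats

def kruskal_tuning(firing_matrix, word_labels, alpha=0.05):
    """Kruskal-Wallis test across words for every unit in one task phase,
    followed by Benjamini-Hochberg FDR adjustment across units.

    firing_matrix: (n_trials, n_units) phase-averaged firing rates."""
    words = np.unique(word_labels)
    p = np.array([
        stats.kruskal(*[firing_matrix[word_labels == w, u] for w in words]).pvalue
        for u in range(firing_matrix.shape[1])
    ])
    order = np.argsort(p)                          # Benjamini-Hochberg step-up
    ranked = p[order] * p.size / (np.arange(p.size) + 1)
    p_adj = np.empty_like(p)
    p_adj[order] = np.minimum(np.minimum.accumulate(ranked[::-1])[::-1], 1.0)
    return p_adj, p_adj < alpha
```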

Classification

Using the neuronal firing rates recorded during the tasks, a classifier was used to evaluate how well the set of words could be differentiated during each phase. Classifiers were trained on firing rates averaged over each task phase, resulting in six matrices of size n × m, where n is the number of trials and m is the number of recorded units. A model for each phase was built using LDA, assuming an identical covariance matrix for each word, which resulted in the best classification accuracies. Leave-one-out CV was performed to estimate decoding performance, leaving out a different trial at each iteration. PCA was applied to the training data, the PCs explaining more than 95% of the variance were selected as features, and the same projection was applied to the held-out trial. A 95% confidence interval of the mean was computed as described above.
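A minimal sketch of this decoder using scikit-learn, which is an assumed tooling choice (the analyses were run in MATLAB and Python); passing a float to PCA's n_components keeps the components explaining that fraction of the variance, and scikit-learn's LDA uses a covariance matrix shared across classes:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut

def loo_decoding_accuracy(features, labels, var_kept=0.95):
    """Leave-one-out CV: on each fold, fit PCA on the training trials only,
    keep the components explaining >95% of the variance, then fit an LDA
    and score the single held-out trial."""
    correct = 0
    for train, test in LeaveOneOut().split(features):
        pca = PCA(n_components=var_kept).fit(features[train])
        clf = LinearDiscriminantAnalysis()     # shared covariance across classes
        clf.fit(pca.transform(features[train]), labels[train])
        correct += int(clf.predict(pca.transform(features[test]))[0] == labels[test][0])
    return correct / len(labels)
```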

Cross-phase classification

To estimate shared neural representations between different task phases, we performed cross-phase classification. The process consisted of training a classification model (as described above) on one task phase (for example, ITI) and testing it on the ITI, cue, internal speech and vocalized speech phases. The method was repeated for each of the ten sessions individually, and a 95% confidence interval of the mean was computed. Significant differences in classification accuracies between phases decoded with the same model were evaluated using a paired two-tailed t-test. FDR correction of the P values was performed as in the ‘Linear regression tuning analysis’ 61, 62.
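Cross-phase decoding reuses the same model but scores features taken from a different phase; this sketch assumes trial-aligned feature matrices for the two phases, and again uses scikit-learn as an illustrative tooling choice:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def cross_phase_accuracy(train_feats, test_feats, labels, var_kept=0.95):
    """Train the PCA + LDA decoder on one task phase's trial-averaged firing
    rates, then score the same trials' features from a different phase."""
    pca = PCA(n_components=var_kept).fit(train_feats)
    clf = LinearDiscriminantAnalysis().fit(pca.transform(train_feats), labels)
    return clf.score(pca.transform(test_feats), labels)
```

A shared representation between phases shows up as above-chance cross-phase accuracy; a phase with no word information (such as ITI) decodes near chance.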

Classification performance significance testing

To assess the significance of classification performance, a null dataset was created by repeating the classification 100 times with shuffled labels. Different percentile levels of this null distribution were then computed and compared to the mean of the actual data. Mean classification performances higher than the 97.5th percentile were denoted with P < 0.05 and higher than the 99.5th percentile with P < 0.01.
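This shuffle test can be sketched generically over any decoding function; the 100 repetitions and percentile cut-offs follow the text, while the function signature is illustrative:

```python
import numpy as np

def shuffle_significance(decode_fn, features, labels, n_shuffles=100, seed=0):
    """Build a null distribution by re-running the decoder with shuffled labels,
    then compare the true accuracy to the 97.5th / 99.5th percentiles."""
    rng = np.random.default_rng(seed)
    true_acc = decode_fn(features, labels)
    null = np.array([decode_fn(features, rng.permutation(labels))
                     for _ in range(n_shuffles)])
    sig_05 = true_acc > np.percentile(null, 97.5)   # reported as P < 0.05
    sig_01 = true_acc > np.percentile(null, 99.5)   # reported as P < 0.01
    return true_acc, null, sig_05, sig_01
```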

dPCA analysis

dPCA was performed on the session data to study the activity of the neuronal population in relation to the external task parameters: cue modality and word. Kobak et al. 63 introduced dPCA as a refinement of their earlier dimensionality reduction technique (of the same name) that attempts to combine the explanatory strengths of LDA and PCA. dPCA deconstructs neuronal population activity into individual components, each of which relates to a single task parameter 64.

This text follows the methodology outlined by Kobak et al. 63 . Briefly, this involved the following steps for N neurons:

First, unlike in PCA, we focused not on the matrix, X, of the original data, but on the matrices of marginalizations, \(X_\phi\). The marginalizations were computed as neural activity averaged over trials, k, and some task parameters, in analogy to the covariance decomposition done in multivariate analysis of variance. Since our dataset has three parameters, timing, t, cue modality, c (auditory or visual), and word, w (eight different words), we obtained the total activity as the sum of the average activity, the marginalizations and a final noise term:

\(X_{tcwk} = \bar{X} + \bar{X}_t + (\bar{X}_c + \bar{X}_{tc}) + (\bar{X}_w + \bar{X}_{tw}) + (\bar{X}_{cw} + \bar{X}_{tcw}) + \epsilon_{tcwk}\)

The above notation of Kobak et al. is the same as used in factorial ANOVA, that is, \(X_{tcwk}\) is the matrix of firing rates for all neurons, \(\langle\cdot\rangle_{ab}\) is the average over a set of parameters \(a, b, \ldots\), \(\bar{X} = \langle X_{tcwk}\rangle_{tcwk}\), \(\bar{X}_t = \langle X_{tcwk} - \bar{X}\rangle_{cwk}\), \(\bar{X}_{tc} = \langle X_{tcwk} - \bar{X} - \bar{X}_t - \bar{X}_c - \bar{X}_w\rangle_{wk}\) and so on. Finally, \(\epsilon_{tcwk} = X_{tcwk} - \langle X_{tcwk}\rangle_k\).

Participant 1 datasets were composed of N = 333 (SMG), N = 828 (S1) and k = 8. Participant 2 datasets were composed of N = 547 (SMG), N = 522 (S1) and k = 16. To create balanced datasets, error trials were replaced by the average firing rate of the remaining k − 1 trials.

Our second step reduced the number of terms by grouping them, as shown in the equation above: there is no benefit in demixing a time-independent pure task term \(\bar{X}_a\) from the time–task interaction term \(\bar{X}_{ta}\), since all components are expected to change with time. This grouping reduced the parametrization to just five marginalization terms and the noise term (reading in order): the mean firing rate, the task-independent term, the cue modality term, the word term, the cue modality–word interaction term and the trial-to-trial noise.
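For trial-averaged data, the grouped marginalizations can be computed directly with nested averages; this numpy sketch (hypothetical function name, trial dimension already averaged out) verifies that the five pieces sum back to the original activity:

```python
import numpy as np

def dpca_marginalizations(X):
    """X: trial-averaged firing rates, shape (neurons, time, cue, word).
    Returns the per-neuron mean plus the grouped marginalizations
    (time, cue, word, cue-word interaction); the five pieces sum to X."""
    mean = X.mean(axis=(1, 2, 3), keepdims=True)          # mean firing rate
    m_t = X.mean(axis=(2, 3), keepdims=True) - mean       # condition-independent
    m_c = X.mean(axis=3, keepdims=True) - X.mean(axis=(2, 3), keepdims=True)
    m_w = X.mean(axis=2, keepdims=True) - X.mean(axis=(2, 3), keepdims=True)
    m_cw = X - mean - m_t - m_c - m_w                     # cue-word interaction
    return mean, m_t, m_c, m_w, m_cw
```

Each marginalization averages to zero over the parameter it isolates, which is what lets dPCA attribute variance to single task parameters.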

Finally, we gained extra flexibility by having two separate linear mappings, \(F_\varphi\) for encoding and \(D_\varphi\) for decoding (unlike in PCA, they are not assumed to be transposes of each other). These matrices were chosen to minimize the loss function, with a quadratic penalty added to avoid overfitting:

\(L_\varphi = \Vert X_\varphi - F_\varphi D_\varphi X \Vert^2 + \mu \Vert F_\varphi D_\varphi \Vert^2\)

Here, \(\mu = (\lambda \Vert X \Vert)^2\), where λ was optimally selected through tenfold CV in each dataset.

We refer the reader to Kobak et al. for a description of the full analytic solution.

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Data availability

The data supporting the findings of this study are openly available via Zenodo at https://doi.org/10.5281/zenodo.10697024 (ref. 65 ). Source data are provided with this paper.

Code availability

The custom code developed for this study is openly available via Zenodo at https://doi.org/10.5281/zenodo.10697024 (ref. 65 ).

Hecht, M. et al. Subjective experience and coping in ALS. Amyotroph. Lateral Scler. Other Mot. Neuron Disord. 3 , 225–231 (2002).

Aflalo, T. et al. Decoding motor imagery from the posterior parietal cortex of a tetraplegic human. Science 348 , 906–910 (2015).

Andersen, R. A. Machines that translate wants into actions. Scientific American https://www.scientificamerican.com/article/machines-that-translate-wants-into-actions/ (2019).

Andersen, R. A., Aflalo, T. & Kellis, S. From thought to action: the brain–machine interface in posterior parietal cortex. Proc. Natl Acad. Sci. USA 116 , 26274–26279 (2019).

Andersen, R. A., Kellis, S., Klaes, C. & Aflalo, T. Toward more versatile and intuitive cortical brain machine interfaces. Curr. Biol. 24 , R885–R897 (2014).

Dash, D., Ferrari, P. & Wang, J. Decoding imagined and spoken phrases from non-invasive neural (MEG) signals. Front. Neurosci. 14 , 290 (2020).

Luo, S., Rabbani, Q. & Crone, N. E. Brain–computer interface: applications to speech decoding and synthesis to augment communication. Neurotherapeutics https://doi.org/10.1007/s13311-022-01190-2 (2022).

Martin, S., Iturrate, I., Millán, J. D. R., Knight, R. T. & Pasley, B. N. Decoding inner speech using electrocorticography: progress and challenges toward a speech prosthesis. Front. Neurosci. 12 , 422 (2018).

Rabbani, Q., Milsap, G. & Crone, N. E. The potential for a speech brain–computer interface using chronic electrocorticography. Neurotherapeutics 16 , 144–165 (2019).

Lopez-Bernal, D., Balderas, D., Ponce, P. & Molina, A. A state-of-the-art review of EEG-based imagined speech decoding. Front. Hum. Neurosci. 16 , 867281 (2022).

Nicolas-Alonso, L. F. & Gomez-Gil, J. Brain computer interfaces, a review. Sensors 12 , 1211–1279 (2012).

Herff, C., Krusienski, D. J. & Kubben, P. The potential of stereotactic-EEG for brain–computer interfaces: current progress and future directions. Front. Neurosci. 14 , 123 (2020).

Angrick, M. et al. Speech synthesis from ECoG using densely connected 3D convolutional neural networks. J. Neural Eng. https://doi.org/10.1088/1741-2552/ab0c59 (2019).

Herff, C. et al. Generating natural, intelligible speech from brain activity in motor, premotor, and inferior frontal cortices. Front. Neurosci. 13 , 1267 (2019).

Kellis, S. et al. Decoding spoken words using local field potentials recorded from the cortical surface. J. Neural Eng. 7 , 056007 (2010).

Makin, J. G., Moses, D. A. & Chang, E. F. Machine translation of cortical activity to text with an encoder–decoder framework. Nat. Neurosci. 23 , 575–582 (2020).

Metzger, S. L. et al. A high-performance neuroprosthesis for speech decoding and avatar control. Nature 620 , 1037–1046 (2023).

Moses, D. A. et al. Neuroprosthesis for decoding speech in a paralyzed person with anarthria. N. Engl. J. Med. 385 , 217–227 (2021).

Guenther, F. H. et al. A wireless brain–machine interface for real-time speech synthesis. PLoS ONE 4 , e8218 (2009).

Stavisky, S. D. et al. Neural ensemble dynamics in dorsal motor cortex during speech in people with paralysis. eLife 8 , e46015 (2019).

Wilson, G. H. et al. Decoding spoken English from intracortical electrode arrays in dorsal precentral gyrus. J. Neural Eng. 17 , 066007 (2020).

Willett, F. R. et al. A high-performance speech neuroprosthesis. Nature 620 , 1031–1036 (2023).

Wandelt, S. K. et al. Decoding grasp and speech signals from the cortical grasp circuit in a tetraplegic human. Neuron https://doi.org/10.1016/j.neuron.2022.03.009 (2022).

Anumanchipalli, G. K., Chartier, J. & Chang, E. F. Speech synthesis from neural decoding of spoken sentences. Nature 568 , 493–498 (2019).

Bocquelet, F., Hueber, T., Girin, L., Savariaux, C. & Yvert, B. Real-time control of an articulatory-based speech synthesizer for brain computer interfaces. PLoS Comput. Biol. 12 , e1005119 (2016).

Metzger, S. L. et al. Generalizable spelling using a speech neuroprosthesis in an individual with severe limb and vocal paralysis. Nat. Commun. 13 , 6510 (2022).

Meng, K. et al. Continuous synthesis of artificial speech sounds from human cortical surface recordings during silent speech production. J. Neural Eng. https://doi.org/10.1088/1741-2552/ace7f6 (2023).

Proix, T. et al. Imagined speech can be decoded from low- and cross-frequency intracranial EEG features. Nat. Commun. 13 , 48 (2022).

Pei, X., Barbour, D. L., Leuthardt, E. C. & Schalk, G. Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans. J. Neural Eng. 8 , 046028 (2011).

Ikeda, S. et al. Neural decoding of single vowels during covert articulation using electrocorticography. Front. Hum. Neurosci. 8 , 125 (2014).

Martin, S. et al. Word pair classification during imagined speech using direct brain recordings. Sci. Rep. 6 , 25803 (2016).

Angrick, M. et al. Real-time synthesis of imagined speech processes from minimally invasive recordings of neural activity. Commun. Biol. 4 , 1055 (2021).

Price, C. J. The anatomy of language: a review of 100 fMRI studies published in 2009. Ann. N. Y. Acad. Sci. 1191 , 62–88 (2010).

Langland-Hassan, P. & Vicente, A. Inner Speech: New Voices (Oxford Univ. Press, 2018).

Perrone-Bertolotti, M., Rapin, L., Lachaux, J.-P., Baciu, M. & Lœvenbruck, H. What is that little voice inside my head? Inner speech phenomenology, its role in cognitive performance, and its relation to self-monitoring. Behav. Brain Res. 261 , 220–239 (2014).

Pei, X. et al. Spatiotemporal dynamics of electrocorticographic high gamma activity during overt and covert word repetition. NeuroImage 54 , 2960–2972 (2011).

Oberhuber, M. et al. Four functionally distinct regions in the left supramarginal gyrus support word processing. Cereb. Cortex 26 , 4212–4226 (2016).

Binder, J. R. Current controversies on Wernicke’s area and its role in language. Curr. Neurol. Neurosci. Rep. 17 , 58 (2017).

Geva, S. et al. The neural correlates of inner speech defined by voxel-based lesion–symptom mapping. Brain 134 , 3071–3082 (2011).

Cooney, C., Folli, R. & Coyle, D. Opportunities, pitfalls and trade-offs in designing protocols for measuring the neural correlates of speech. Neurosci. Biobehav. Rev. 140 , 104783 (2022).

Dash, D. et al. Interspeech (International Speech Communication Association, 2020).

Alderson-Day, B. & Fernyhough, C. Inner speech: development, cognitive functions, phenomenology, and neurobiology. Psychol. Bull. 141 , 931–965 (2015).

Muret, D., Root, V., Kieliba, P., Clode, D. & Makin, T. R. Beyond body maps: information content of specific body parts is distributed across the somatosensory homunculus. Cell Rep. 38 , 110523 (2022).

Rosenthal, I. A. et al. S1 represents multisensory contexts and somatotopic locations within and outside the bounds of the cortical homunculus. Cell Rep. 42 , 112312 (2023).

Leuthardt, E. et al. Temporal evolution of gamma activity in human cortex during an overt and covert word repetition task. Front. Hum. Neurosci. 6 , 99 (2012).

Indefrey, P. & Levelt, W. J. M. The spatial and temporal signatures of word production components. Cognition 92 , 101–144 (2004).

Alderson-Day, B., Bernini, M. & Fernyhough, C. Uncharted features and dynamics of reading: voices, characters, and crossing of experiences. Conscious. Cogn. 49 , 98–109 (2017).

Armenta Salas, M. et al. Proprioceptive and cutaneous sensations in humans elicited by intracortical microstimulation. eLife 7 , e32904 (2018).

Cooney, C., Folli, R. & Coyle, D. Neurolinguistics research advancing development of a direct-speech brain–computer interface. iScience 8 , 103–125 (2018).

Soroush, P. Z. et al. The nested hierarchy of overt, mouthed, and imagined speech activity evident in intracranial recordings. NeuroImage 269 , 119913 (2023).

Alexander, J. D. & Nygaard, L. C. Reading voices and hearing text: talker-specific auditory imagery in reading. J. Exp. Psychol. Hum. Percept. Perform. 34 , 446–459 (2008).

Filik, R. & Barber, E. Inner speech during silent reading reflects the reader’s regional accent. PLoS ONE 6 , e25782 (2011).

Lœvenbruck, H., Baciu, M., Segebarth, C. & Abry, C. The left inferior frontal gyrus under focus: an fMRI study of the production of deixis via syntactic extraction and prosodic focus. J. Neurolinguist. 18 , 237–258 (2005).

Roussel, P. et al. Observation and assessment of acoustic contamination of electrophysiological brain signals during speech production and sound perception. J. Neural Eng. 17 , 056028 (2020).

Aflalo, T. et al. A shared neural substrate for action verbs and observed actions in human posterior parietal cortex. Sci. Adv. 6 , eabb3984 (2020).

Rutishauser, U., Aflalo, T., Rosario, E. R., Pouratian, N. & Andersen, R. A. Single-neuron representation of memory strength and recognition confidence in left human posterior parietal cortex. Neuron 97 , 209–220.e3 (2018).

Brainard, D. H. The psychophysics toolbox. Spat. Vis. 10 , 433–436 (1997).

Pelli, D. G. The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat. Vis. 10 , 437–442 (1997).

Kleiner, M. et al. What’s new in psychtoolbox-3. Perception 36 , 1–16 (2007).

Benjamini, Y. & Hochberg, Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. R. Stat. Soc. B 57 , 289–300 (1995).

Benjamini, Y. & Yekutieli, D. The control of the false discovery rate in multiple testing under dependency. Ann. Stat. 29 , 1165–1188 (2001).

Kobak, D. et al. Demixed principal component analysis of neural population data. eLife 5 , e10989 (2016).

Kobak, D. dPCA. GitHub https://github.com/machenslab/dPCA (2020).

Wandelt, S. K. Data associated to manuscript “Representation of internal speech by single neurons in human supramarginal gyrus”. Zenodo https://doi.org/10.5281/zenodo.10697024 (2024).

Acknowledgements

We thank L. Bashford and I. Rosenthal for helpful discussions and data collection. We thank our study participants for their dedication to the study that made this work possible. This research was supported by the NIH National Institute of Neurological Disorders and Stroke Grant U01: U01NS098975 and U01: U01NS123127 (S.K.W., D.A.B., K.P., C.L. and R.A.A.) and by the T&C Chen Brain-Machine Interface Center (S.K.W., D.A.B. and R.A.A.). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the paper.

Author information

Authors and affiliations

Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA

Sarah K. Wandelt, David A. Bjånes, Kelsie Pejsa, Brian Lee, Charles Liu & Richard A. Andersen

T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA

Sarah K. Wandelt, David A. Bjånes, Kelsie Pejsa & Richard A. Andersen

Rancho Los Amigos National Rehabilitation Center, Downey, CA, USA

David A. Bjånes & Charles Liu

Department of Neurological Surgery, Keck School of Medicine of USC, Los Angeles, CA, USA

Brian Lee & Charles Liu

USC Neurorestoration Center, Keck School of Medicine of USC, Los Angeles, CA, USA

Contributions

S.K.W., D.A.B. and R.A.A. designed the study. S.K.W. and D.A.B. developed the experimental tasks and collected the data. S.K.W. analysed the results and generated the figures. S.K.W., D.A.B. and R.A.A. interpreted the results and wrote the paper. K.P. coordinated regulatory requirements of clinical trials. C.L. and B.L. performed the surgery to implant the recording arrays.

Corresponding author

Correspondence to Sarah K. Wandelt .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Peer review

Peer review information.

Nature Human Behaviour thanks Abbas Babajani-Feremi, Matthew Nelson and Blaise Yvert for their contribution to the peer review of this work. Peer reviewer reports are available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary information.

Supplementary Figs. 1–5.

Reporting Summary

Peer review file

Supplementary Video 1

The video shows the participant performing the internal speech task in real time. The participant is cued with a word on the screen. After a delay, an orange dot appears, during which the participant performs internal speech. Then, the decoded word appears on the screen, in green if it is correctly decoded and in red if it is wrongly decoded.

Supplementary Data

Source data for Fig. 3.

Source data for Fig. 4.

Source data for Fig. 5.

Source Data Fig. 3

Statistical source data.

Source Data Fig. 5

Source Data Fig. 6

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Wandelt, S.K., Bjånes, D.A., Pejsa, K. et al. Representation of internal speech by single neurons in human supramarginal gyrus. Nat Hum Behav (2024). https://doi.org/10.1038/s41562-024-01867-y

Received : 15 May 2023

Accepted : 16 March 2024

Published : 13 May 2024

DOI : https://doi.org/10.1038/s41562-024-01867-y

This article is cited by

Brain-reading device is best yet at decoding ‘internal speech’.

  • Miryam Naddaf

Nature (2024)


  13. Word Embeddings for Indian Languages

    Word embedding is a term used for the representation of words ... on Wiki data and can be downloaded from here. Below is a code snippet that shows loading fastText word embeddings for Hindi.

  14. Data Representation

    Mantissa, Significand and fraction are synonymously used terms. In the computer, the representation is binary and the binary point is not fixed. For example, a number, say, 23.345 can be written as 2.3345 x 101 or 0.23345 x 102 or 2334.5 x 10-2. The representation 2.3345 x 101 is said to be in normalised form.

  15. representation हिंदी में

    representation translate: (किसी व्यक्ति या संस्था द्वारा औपचारिक रूप से किसी का ...

  16. Numerical Data Representation in Hindi

    #coa #howtopassCOa #Lastmomenttuitions #lmtTo get the study materials for final yeat(Notes, video lectures, previous years, semesters question papers)https:/...

  17. डेटा और इनफॉर्मेशन मे क्या अंतर है? (Data and Information in Hindi

    Difference Between Data and Information in Hindi: यदि आप कंप्यूटर साइंस के स्टूडेंट है तो, आपने डेटा (Data) और जानकारी (Information) जैसे शब्दों को जरूर सुना होगा. कंप्यूटर के क्षेत्र से जुड़ा कोई ...

  18. REPRESENTATION MEANING IN HINDI

    Definition of Representation. a presentation to the mind in the form of an idea or image. a creation that is a visual or tangible rendering of someone or something. the act of representing; standing in for someone or some group and speaking with authority in their behalf. more synonym details >>.

  19. representation in Hindi

    representation translate: (किसी व्यक्ति या संस्था द्वारा औपचारिक रूप से किसी का ...

  20. representation

    a creation that is a visual or tangible rendering of someone or something. the right of being represented by delegates who have a voice in some legislative body. a presentation to the mind in the form of an idea or image. Synonyms. internal representation, mental representation. a statement of facts and reasons made in appealing or protesting.

  21. Handwritten Hindi character recognition: a review

    This data representation is chosen in such a way that it aids upcoming phases, mainly feature extraction phase. Few preprocessing techniques which are used for character recognition are summarised further. ... Word recognition: The characters in Hindi language are connected by headline. For word recognition, there is a need to detect this ...

  22. Hindi translation of 'representation'

    Hindi Translation of "REPRESENTATION" | The official Collins English-Hindi Dictionary online. Over 100,000 Hindi translations of English words and phrases.

  23. REPRESENT in Hindi

    REPRESENT translate: (औपचारिक रूप से किसी का) प्रतिनिधित्व करना, (किसी ...

  24. Representation of internal speech by single neurons in human ...

    Representation for all words was observed in each phase, including pseudowords (bindip and nifzig) (Fig. 3c,f). To identify neurons with selective activity for unique words, we performed a Kruskal ...

  25. Epidural Spinal Cord Recordings (ESRs): Sources of Artifact in

    Introduction: Evoked compound action potentials (ECAPs) measured using epidural spinal recordings (ESRs) during epidural spinal cord stimulation (SCS) can help elucidate fundamental mechanisms for the treatment of pain, as well as inform closed-loop control of SCS. Previous studies have used ECAPs to characterize the neural response to various neuromodulation therapies and have demonstrated ...