
SOPG: Search-Based Ordered Password Generation for Autoregressive Neural Networks

Analysis of SOPG, a novel method for generating passwords in descending probability order using autoregressive neural networks, significantly improving password guessing efficiency.
strongpassword.org | PDF Size: 0.5 MB

1. Introduction

Passwords remain the most widely used means of user authentication, balancing convenience against effectiveness. Their security, however, is constantly challenged by password-guessing attacks, a key component of offensive security audits and defensive strength evaluation. Traditional approaches, from rule lists to statistical models such as Markov chains and PCFG, have inherent limits in diversity and efficiency. The advent of deep learning, particularly autoregressive neural networks, promised a paradigm shift. Yet a critical oversight persisted: the generation method itself. Standard sampling techniques introduce randomness, yielding duplicate passwords and unordered output, which throttles attack efficiency. This paper introduces SOPG (Search-Based Ordered Password Generation), a novel method that compels autoregressive models to generate passwords in approximately descending order of probability, thereby revolutionizing the efficiency of neural network-based password guessing.

2. Background & Related Work

2.1 Evolution of Password Guessing

The field has evolved through distinct phases: heuristic rule-based methods relied on manual dictionaries and transformation rules (e.g., John the Ripper rules), which were experience-dependent and lacked theoretical grounding. The proliferation of real password leaks post-2009 enabled statistical methods. The Markov model, as used in OMEN, predicts the next character from a fixed-length history, while Probabilistic Context-Free Grammar (PCFG) splits passwords into structure templates (alpha, digit, symbol segments) and learns their probabilities. Although principled, these models often produce many failed guesses and are difficult to extend.
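To make the PCFG idea concrete, here is a minimal sketch (my illustration, not Weir et al.'s implementation) of how a password is abstracted into a structure template of letter (L), digit (D), and symbol (S) runs, whose frequencies a PCFG model would then learn from leaked corpora:

```python
import re

def pcfg_structure(password):
    # Split into maximal runs of letters, digits, or symbols, and encode each
    # run as its class tag plus its length, e.g. "password123!" -> "L8D3S1".
    segments = []
    for m in re.finditer(r"[A-Za-z]+|[0-9]+|[^A-Za-z0-9]+", password):
        run = m.group()
        tag = "L" if run[0].isalpha() else "D" if run[0].isdigit() else "S"
        segments.append(f"{tag}{len(run)}")
    return "".join(segments)
```

A trained PCFG then ranks templates such as "L8D3S1" by observed frequency and fills each segment from per-class dictionaries.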

2.2 Neural Network Approaches

Deep learning models, capable of learning complex, high-dimensional distributions, emerged as powerful successors. PassGAN employed Generative Adversarial Networks (GANs) to produce passwords, though GANs are notoriously unstable on discrete data. VAEPass applied Variational Autoencoders. The most recent and relevant approach is PassGPT, which leverages the GPT (Generative Pre-trained Transformer) architecture, an autoregressive model that predicts the next token given all previous ones. However, all these models typically rely on standard sampling (e.g., random sampling, top-k, nucleus sampling) during generation, which does not guarantee order or uniqueness.

3. The SOPG Method

3.1 Core Idea

SOPG addresses the fundamental inefficiency of random sampling. Instead of generating passwords stochastically, it frames password generation as a search problem. The goal is to traverse the vast space of possible passwords (defined by the model's vocabulary and maximum length) in an order that approximates descending probability, as assigned by the autoregressive neural network.

3.2 Search Algorithm

While the PDF abstract does not give full details of the specific algorithm, SOPG likely employs or adapts a best-first search or a beam-search-like strategy guided by the model's probability estimates. A password is represented as a sequence of tokens. The search maintains a priority queue (e.g., a heap) of partial or complete sequences, ordered by their cumulative probability or a heuristic score derived from it. At each step, the best candidate is expanded by appending possible next tokens (from the vocabulary), and the new candidates are scored and inserted into the priority queue. This ensures that the output stream is ordered approximately from most probable to least probable.
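A minimal sketch of such a best-first enumeration (my illustration under the assumptions above; the paper's exact algorithm may differ). The toy conditional distribution stands in for the neural model's softmax:

```python
import heapq
import math

def ordered_generation(next_token_probs, eos, max_len, limit):
    # Best-first search over token sequences: a min-heap keyed on negative
    # log-probability always expands the most probable candidate next, so
    # completed sequences are emitted in descending probability order.
    heap = [(0.0, ())]
    results = []
    while heap and len(results) < limit:
        neg_logp, prefix = heapq.heappop(heap)
        if prefix and prefix[-1] == eos:
            # Reached end-of-sequence: emit the finished password.
            results.append((math.exp(-neg_logp), prefix[:-1]))
            continue
        if len(prefix) >= max_len:
            continue  # prune candidates that exceed the length budget
        for token, p in next_token_probs(prefix).items():
            if p > 0.0:
                heapq.heappush(heap, (neg_logp - math.log(p), prefix + (token,)))
    return results

# Toy conditional distribution standing in for the trained model's softmax.
def toy_model(prefix):
    if not prefix:
        return {"a": 0.6, "b": 0.4}
    return {"$": 0.7, "a": 0.2, "b": 0.1}

top5 = ordered_generation(toy_model, eos="$", max_len=3, limit=5)
```

Because completed sequences are only emitted when they reach the top of the heap, the output list is duplicate-free and sorted from most to least probable, which is precisely the property that makes every guess count.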

3.3 SOPGesGPT Model

The authors first realize their method by building SOPGesGPT, a password-guessing model based on the GPT architecture. The model is trained on leaked password datasets to learn the underlying distribution. Crucially, during generation it applies the SOPG algorithm instead of standard sampling, making it the vehicle for demonstrating SOPG's advantage.

4. Technical Details & Mathematical Formulation

Given an autoregressive model (like GPT), the probability of a password sequence $S = (s_1, s_2, ..., s_T)$ is factorized as:

$$P(S) = \prod_{t=1}^{T} P(s_t \mid s_1, \ldots, s_{t-1})$$

Standard random sampling draws $s_t$ from this conditional distribution at each step, leading to a random walk. SOPG, in contrast, aims to find the sequence $S^*$ that maximizes $P(S)$ or systematically enumerates high-probability sequences. This can be viewed as:

$$S^* = \arg\max_{S} P(S) = \arg\max_{S} \sum_{t=1}^{T} \log P(s_t \mid s_1, \ldots, s_{t-1})$$
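As a concrete illustration of the chain-rule factorization, a password's score under any autoregressive model is just the sum of per-step conditional log-probabilities. The conditional distribution below is a made-up stand-in, not the paper's model:

```python
import math

def sequence_log_prob(password, cond_prob):
    # Chain rule: log P(s_1..s_T) = sum over t of log P(s_t | s_1..s_{t-1}).
    logp = 0.0
    for t, ch in enumerate(password):
        logp += math.log(cond_prob(password[:t], ch))
    return logp

# Hypothetical conditional distribution: 'a' has probability 0.5 after any
# prefix; the other 9 symbols of a 10-symbol vocabulary share the rest.
def toy_cond(prefix, ch):
    return 0.5 if ch == "a" else 0.5 / 9

p_aa = math.exp(sequence_log_prob("aa", toy_cond))  # 0.5 * 0.5 = 0.25
```

Working in log space avoids underflow for long sequences, which is why the search score in practice is a cumulative log-probability rather than a raw product.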

5. Experimental Results & Analysis

  • Coverage Rate (SOPGesGPT): 35.06%. Top coverage achieved in the one-site test.
  • Improvement over PassGPT: 81%. Higher cover rate than the most recent baseline.
  • Improvement over PassGAN: 421%. Massive gain over the GAN-based approach.

5.1 Comparison with Random Sampling

The paper first validates SOPG's core efficiency claim against standard random sampling on the same underlying model. Key Findings:

  • Zero Duplicates: SOPG generates a unique, ordered list, eliminating the waste of computational resources on duplicate guesses.
  • Fewer Inferences for Same Coverage: To reach the same coverage (the percentage of test-set passwords cracked), SOPG requires far fewer model inferences (forward passes) than random sampling.
  • Fewer Total Guesses: Consequently, SOPG cracks the same number of passwords with a much shorter guess list, translating directly into faster attacks.
This experiment confirms that the generation method was a major bottleneck, and that SOPG removes it effectively.
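The duplicate-waste point is easy to see in a toy simulation (the Zipf-like weights are my assumption, standing in for real leaked-password frequency skew): random sampling from a skewed distribution keeps repeating its top guesses, while an ordered enumeration of the same budget is duplicate-free by construction:

```python
import random

random.seed(0)
# Toy universe of 10,000 "passwords" with heavy-tailed, Zipf-like weights.
universe = list(range(10_000))
weights = [1.0 / (rank + 1) for rank in universe]

# Random sampling: duplicate draws are wasted model inferences.
draws = random.choices(universe, weights=weights, k=1_000)
unique_random = len(set(draws))

# Ordered enumeration (SOPG's approach): the first 1,000 candidates are all
# distinct and are exactly the 1,000 most probable ones.
unique_ordered = len(set(universe[:1_000]))
```

Under this skew, a large fraction of the 1,000 random draws land on the same few head-of-distribution guesses, so `unique_random` falls well short of the 1,000 distinct guesses the ordered list delivers.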

5.2 Benchmarks Against the State of the Art

SOPGesGPT was compared in a one-site test against the major baselines: OMEN (Markov), FLA, PassGAN (GAN), VAEPass (VAE), and the recent PassGPT (GPT with random sampling).

  • Cover Rate: SOPGesGPT achieved a 35.06% cover rate. The improvements are staggering: 254% over OMEN, 298% over FLA, 421% over PassGAN, 380% over VAEPass, and 81% over PassGPT.
  • Effective Rate: The paper also mentions leading in "effective rate," likely referring to the number of unique valid passwords generated per unit of time or computation, further underscoring SOPG's efficiency.
Chart Description: A bar chart would show "Cover Rate (%)" on the Y-axis and model names on the X-axis. SOPGesGPT's bar would be dramatically taller than all others, with PassGPT in second place but significantly lower. A line overlay could show the number of guesses required to reach 20% coverage, where SOPGesGPT's line would rise steeply early on, demonstrating its "hit hard and fast" capability.

6. Analysis Framework & Case Example

Framework: Password Guessing Efficiency Quadrant
We can analyze models on two axes: Model Capacity (ability to learn complex distributions, e.g., GPT > Markov) and Generation Efficiency (optimal ordering of outputs).

  • Quadrant I (High Capacity, Low Efficiency): PassGPT, VAEPass. Model strength is throttled by random sampling.
  • Quadrant II (High Capacity, High Efficiency): SOPGesGPT. The position achieved by this work.
  • Quadrant III (Low Capacity, Low Efficiency): Basic rule-based attacks.
  • Quadrant IV (Low Capacity, High Efficiency): OMEN, FLA. Their generation is inherently ordered (by probability), but limited model capacity caps their final performance.
Non-Code Case Example: Imagine two treasure hunters (attackers) with identical high-quality maps (the trained GPT model). One hunter (Random Sampling) wanders haphazardly, often revisiting spots, and finds treasure slowly. The other hunter (SOPG) has a metal detector that points to the most promising nearby spot first, following an ordered, non-repeating path. For the same number of steps, the SOPG hunter finds far more treasure. SOPG is that metal detector for the neural network's map.

7. Application Outlook & Future Directions

Immediate Applications:

  • Proactive Password Strength Evaluation: Security firms can use SOPG-powered tools to audit password policies by generating the most probable attack guesses at high speed, providing realistic risk assessments.
  • Digital Forensics & Lawful Recovery: Accelerating password recovery in forensic investigations where time is critical.
Future Research Directions:
  • Hybrid Search Strategies: Combining SOPG with constrained randomness to explore lower-probability "creative" guesses that may pay off later in an attack, balancing exploitation and exploration.
  • Hardware-Accelerated Search: Implementing the search on GPUs/TPUs to parallelize candidate scoring, reducing the overhead of the search procedure itself.
  • Beyond Passwords: Applying ordered generation to other autoregressive tasks where ordered, unique outputs are valuable, such as generating software test cases or producing design variants ranked by probability.
  • Defensive Measures: Studying how to detect and defend against such efficient ordered attacks, possibly by analyzing the "fingerprint" of SOPG-generated guess sequences versus random ones.

8. References

  1. M. Jin, J. Ye, R. Shen, H. Lu, "Search-based Ordered Password Generation of Autoregressive Neural Networks," Manuscript Submitted for Publication.
  2. A. Narayanan and V. Shmatikov, "Fast dictionary attacks on passwords using time-space tradeoff," in Proceedings of the 12th ACM conference on Computer and communications security, 2005.
  3. M. Weir, S. Aggarwal, B. de Medeiros, and B. Glodek, "Password cracking using probabilistic context-free grammars," in 2009 30th IEEE Symposium on Security and Privacy, 2009.
  4. J. Ma, W. Yang, M. Luo, and N. Li, "A study of probabilistic password models," in 2014 IEEE Symposium on Security and Privacy, 2014.
  5. B. Hitaj, P. Gasti, G. Ateniese, and F. Perez-Cruz, "PassGAN: A Deep Learning Approach for Password Guessing," in Applied Cryptography and Network Security Workshops, 2019.
  6. OpenAI, "Improving Language Understanding by Generative Pre-Training," 2018. [Online]. Available: https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf
  7. M. Pasquini, D. Bernardo, and G. Ateniese, "PassGPT: Password Modeling and (Guessing) with Large Language Models," arXiv preprint arXiv:2306.01745, 2023.

9. Original Analysis & Expert Commentary

Core Insight

The paper's breakthrough isn't a new neural architecture; it's a surgical strike on the generation bottleneck. For years, the password guessing community, mirroring trends in generative AI, obsessed over model capacity (bigger transformers, better GANs) while treating the sampling process as a solved, secondary problem. Jin et al. correctly identify this as a critical fallacy. Random sampling from a powerful model is like using a precision sniper rifle to spray bullets randomly; SOPG adds the scope and the strategy. This shift in focus from modeling to search is the paper's most significant conceptual contribution. It demonstrates that in security applications where output order directly maps to success rate (cracking the easiest passwords first), search efficiency can outweigh marginal gains in model fidelity.

Logical Flow

The argument is compelling and well-structured: (1) Establish the importance and inefficiency of current neural guessing (random, duplicate-ridden). (2) Propose SOPG as a search-based solution to enforce probability-ordered, unique generation. (3) Empirically prove SOPG's efficiency over random sampling on the same model—a clean ablation study. (4) Showcase the end-to-end superiority by building SOPGesGPT and demolishing existing benchmarks. The 81% improvement over PassGPT is particularly telling; it isolates the value of SOPG by comparing the same GPT architecture with two different generation schemes.

Strengths & Flaws

Strengths: The core idea is elegant and high-impact. The experimental design is robust, with clear, decisive results. The performance gains are not incremental; they are transformative, suggesting SOPG could become a new standard component. The work connects deeply with search algorithms from classical AI, applying them to a modern deep learning context—a fruitful cross-pollination.

Flaws & Open Questions: The PDF excerpt lacks crucial details: the specific search algorithm (A*, beam, best-first?) and its computational overhead. Search is not free; maintaining priority queues and scoring large numbers of candidates is costly. The paper claims "fewer inferences," but does that figure account for the inferences performed inside the search itself? A full cost-benefit analysis is needed. Furthermore, the notion of "approximately descending order" is left vague: how approximate, and does the ordering degrade for longer or more complex passwords? The comparison, while impressive, is a "one-site test"; generalization across diverse datasets (enterprise versus social-media passwords) still needs to be verified. Finally, like all attack advances, this is dual-use technology, empowering defenders and adversaries alike.

Actionable Recommendations

For Security Practitioners: Immediately test your organization's password policies against SOPG-style methods, not only older Markov or GAN models. Update password-strength estimators to account for this new, efficient ordered-attack paradigm.

For AI/ML Researchers: This is a clear call to revisit generation strategies in autoregressive models for goal-directed tasks. Do not focus solely on the loss curve; examine the efficiency of the inference path. Explore neuro-symbolic hybrid approaches in which a learned model guides classical search.

For Vendors & Policymakers: Accelerate the move beyond passwords. SOPG makes dictionary attacks so effective that even moderately complex passwords face elevated risk. Invest in and mandate phishing-resistant MFA (such as FIDO2/WebAuthn) as the primary authentication method. For legacy password systems, enforce strict rate limits and anomaly detection tuned to spot ordered, high-speed attack patterns.

Ultimately, this paper does more than advance password guessing; it teaches a broader lesson: optimizing the final stage of an AI pipeline, the generation strategy, can yield larger real-world gains than endlessly improving the model itself. That is a lesson in effective AI engineering whose relevance extends well beyond password security.