Wikipedia:Articles for deletion/Kolmogorov–Arnold Network

From Wikipedia, the free encyclopedia
The following discussion is an archived debate of the proposed deletion of the article below. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the article's talk page or in a deletion review). No further edits should be made to this page.

The result was delete. Doczilla Ohhhhhh, no! 07:04, 18 May 2024 (UTC)

Kolmogorov–Arnold Network


This wikipage is about a preprint that came out a week ago. It's generated some hype on web forums, but that's an extremely unreliable barometer of notability. Gumshoe2 (talk) 13:57, 10 May 2024 (UTC)
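For context, the preprint's core proposal replaces the fixed node activations of a multilayer perceptron with learnable univariate functions placed on the network's edges, which each node then sums. Below is a minimal sketch of one such layer, assuming a Gaussian basis parameterization for the edge functions; the preprint itself uses B-splines, and nothing here is taken from its code:

    import numpy as np

    class KANLayer:
        """One Kolmogorov-Arnold-style layer: every edge (i, j) carries its
        own learnable univariate function phi_ji, and each output node sums
        the transformed inputs. Each phi is parameterized here as a linear
        combination of fixed Gaussian bumps; an illustrative assumption,
        not the preprint's B-spline implementation."""

        def __init__(self, in_dim, out_dim, n_basis=8, seed=0):
            rng = np.random.default_rng(seed)
            self.centers = np.linspace(-1.0, 1.0, n_basis)  # basis centers on [-1, 1]
            self.width = 2.0 / n_basis                      # shared bandwidth
            # One coefficient vector per edge: shape (out_dim, in_dim, n_basis).
            self.coef = rng.normal(scale=0.1, size=(out_dim, in_dim, n_basis))

        def __call__(self, x):
            # x has shape (batch, in_dim); evaluate every basis bump at every input.
            b = np.exp(-((x[:, :, None] - self.centers) / self.width) ** 2)
            # phi_ji(x_i) = sum_k coef[j, i, k] * b_k(x_i); node j sums over i.
            return np.einsum('bik,jik->bj', b, self.coef)

    # Two stacked layers mirror the inner/outer functions of the representation.
    layer1, layer2 = KANLayer(2, 5), KANLayer(5, 1, seed=1)
    x = np.random.default_rng(2).uniform(-1, 1, size=(4, 2))
    print(layer2(layer1(x)).shape)  # (4, 1)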

I believe the appropriate extent of coverage is limited to the following two sentences, as presently found at Kolmogorov–Arnold representation theorem:
In the field of machine learning, there have been various attempts to use neural networks modeled on the Kolmogorov–Arnold representation. In these works, the Kolmogorov–Arnold theorem plays a role analogous to that of the universal approximation theorem in the study of multilayer perceptrons.
It doesn't seem to be the case that any particular attempt is very notable. Gumshoe2 (talk) 14:17, 10 May 2024 (UTC)
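For reference, the theorem being invoked states that any continuous function f on [0,1]^n can be written as

    f(x_1, \ldots, x_n) = \sum_{q=0}^{2n} \Phi_q\left( \sum_{p=1}^{n} \varphi_{q,p}(x_p) \right),

where the \Phi_q and \varphi_{q,p} are continuous univariate functions; the machine-learning proposals in question replace these fixed functions with learnable ones.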

Although less important than the issue of whether notability is established in reliable sources, I'd like to highlight that a central part of the preprint's self-reported significance is captured in the wiki-statement "KANs have been shown to perform well on problems from knot theory and physics (such as Anderson localization)." This statement is extremely dubious. I'd encourage any mathematician to look at Table 5 on page 24 of the preprint or Table 6 on page 28. The KAN-discovered formulas are, in effect, nothing but classical regression with complicated functions (a minimal illustration follows at the end of this comment). It has been possible to discover similarly complicated formulas for well over a century, and they aren't of any self-apparent interest whatsoever. The stark difference from the "Theory"- or "Human"-discovered formulas should be apparent even to non-mathematicians.

The other examples in the paper involve (extremely) small toy data sets, nowhere close to the scale at which machine learning is uniquely useful. As always, future papers may develop this topic further, but at present it isn't remotely clear that this preprint is a significant development. Gumshoe2 (talk) 16:01, 10 May 2024 (UTC)
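To illustrate what "classical regression with complicated functions" means here: one fits the free parameters of a hand-picked closed-form expression by least squares, a technique available long before neural networks. The model function and data below are invented for illustration and are not taken from the preprint's tables:

    import numpy as np
    from scipy.optimize import curve_fit

    # A hand-picked "complicated" closed form with three free parameters.
    def model(x, a, b, c):
        return a * np.exp(-b * x) * np.sin(c * x)

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 200)
    y = model(x, 2.0, 0.3, 1.5) + rng.normal(scale=0.05, size=x.shape)

    # Nonlinear least squares recovers the parameters from noisy samples.
    popt, _ = curve_fit(model, x, y, p0=[1.0, 0.1, 1.0])
    print(popt)  # approximately [2.0, 0.3, 1.5]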

The article clearly situates the KAN as a recent addition to the long history of attempts to apply KART in a machine learning context. Given the standing of the researchers involved (e.g. Ziming Liu, Jim Halverson, Max Tegmark), this is more than just a random arXiv preprint, and I don't see any benefit to Wikipedia in deleting this information until it gets formally published somewhere in a year or two, no matter whether we personally find the paper's content convincing or important. calr (talk) 09:42, 11 May 2024 (UTC)

I'm sorry, but it's absurd to suggest that these are particularly noteworthy researchers. For example, just the five most recent papers on machine learning on arXiv (the first 5 of the 95 uploaded yesterday) are authored by Mehryar Mohri, Yu-Pin Hsu, Pawel Herman, Vaneet Aggarwal, and Lalitha Sankar. If you judge by author notability, and if Ziming Liu and Jim Halverson meet your standard, then it seems that nearly every new preprint on machine learning is something more than just a random arXiv preprint.
It's true that Max Tegmark is somewhat famous for non-research work like Our Mathematical Universe and Life 3.0 and for various public advocacy. (At least in his former life as a physicist, he was often criticized for unscientific babble; see e.g. the criticism section of Our Mathematical Universe.)
And even if the authors were top machine learning researchers, that wouldn't make any random new paper of theirs significant. Likewise, it won't be enough for this preprint simply to be formally published; it has to be recognized as significant by reliable sources. Gumshoe2 (talk) 16:23, 11 May 2024 (UTC)
"If we allow this article then we'll also have to allow many other articles" isn't really an argument for non-notability. calr (talk) 22:46, 11 May 2024 (UTC)[reply]

That's a bizarre description of (that part of) what I'm saying, which is that the word "notable" loses all meaning if just about every preprint is notable. I am suggesting that your usage of the word is not even cogent.

Here are equally (or much more) notable authors from preprints #5-10 uploaded yesterday: Hao Li, Andreas Krause, Djamila Aouada, Dan Klein, Stefano Savazzi. So all ten of the most recent preprints on machine learning uploaded to arXiv are clearly 'more than just a random arXiv preprint' by your standard. Should I go through all 95 uploaded yesterday? Gumshoe2 (talk) 23:54, 11 May 2024 (UTC)

You haven't raised anything for me to cogently respond to. Your argument seems to be 1) you've personally reviewed the paper and didn't find it notable, 2) the article's title comes from a preprint, and some preprints aren't notable, so the concept isn't notable either (and even when it does appear in a journal, that still doesn't count unless some other source also says so), 3) vague insinuations about "hype on web forums". None of those are relevant to Wikipedia's definition of notability. calr (talk) 16:38, 12 May 2024 (UTC) (Clarified calr (talk) 16:58, 12 May 2024 (UTC))
"the article's title comes from a preprint"
This framing seems disingenuous; everything except for five sentences in the History section comes from this new preprint. Those sentences belong naturally on the page Kolmogorov–Arnold representation theorem. Without much loss, they are even well represented by the sentence presently there, with two of the references included: "In the field of machine learning, there have been various attempts to use neural networks modeled on the Kolmogorov–Arnold representation." Gumshoe2 (talk) 17:42, 12 May 2024 (UTC)
  • Delete A preprint from last month is not a suitable basis for an encyclopedia article. Bulking up a page about a new proposal with "background" references that don't specifically discuss the new proposal is the wrong way to go about writing anything encyclopedic. Adopting the terminology proposed by an unreliable source, and taking that choice of terminology as so definitive that it establishes the article's title, violates NPOV. XOR'easter (talk) 18:01, 14 May 2024 (UTC)
  • Delete Agreed that a preprint from last month is not a suitable basis for an encyclopedia article. If there is still something here in two or three months, another article can be written. Especially in this technical space, we should be cautious that WP is not used to pump a company. RayKiddy (talk) 16:57, 15 May 2024 (UTC)
    In this case I'm not aware of any relevant corporate interests. However, I think wiki should generally tread very carefully on machine learning/AI topics, since many outwardly reliable sources in the field fail to provide any critical perspective, often (indirectly) because of corporate interests but also simply because of lax standards within the field. For example, it's easy to find reliable sources saying that AlphaZero didn't use any knowledge about chess or Go except the rules, since this is how Google advertised their work, even though the primary source material shows this to be, at best, an exaggeration. In many such cases it's hard if not impossible to find reliable sources taking a critical perspective, which is obviously unfortunate for wiki.
    But at least for this wikipage, that's a moot point for now, since there are no reliable secondary sources on this preprint whatsoever. Gumshoe2 (talk) 17:39, 15 May 2024 (UTC)
    That's right, I made 16,000 edits over 20 years as part of a corporate conspiracy to, umm, write one article about a niche topic in machine learning and "pump the stock" of a PhD thesis topic. Well done on cracking the case; your username is truly justified! calr (talk) 10:20, 18 May 2024 (UTC)
The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the article's talk page or in a deletion review). No further edits should be made to this page.