This is an internal test page for all the functionality I support on this site. If you landed here by mistake, head over to the home page.
## Basic Typography
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
This is a paragraph with **bold text**, *italic text*, and ***bold italic text***. You can also use underscores for _italics_ and ___bold italics___.
Here’s an external link and an internal link.
## Lists
### Unordered Lists
- Item 1
- Item 2
  - Nested item 2.1
  - Nested item 2.2
- Item 3
### Ordered Lists
1. First item
2. Second item
   1. Nested item 2.1
   2. Nested item 2.2
3. Third item
### Task Lists
- [x] Completed task
- [ ] Incomplete task
- [x] Another completed task
## Blockquotes
> This is a simple blockquote

> This is a multi-paragraph blockquote
>
> With multiple lines
>
> > And nested quotes
## Code
### Inline Code
Here’s some `inline code` within a paragraph.
### Code Blocks with Syntax Highlighting
```python
def hello_world():
    print("Hello, World!")

# This is a Python comment
for i in range(5):
    hello_world()
```
```javascript
function calculateSum(a, b) {
  // This is a JavaScript comment
  return a + b;
}

const result = calculateSum(5, 3);
console.log(result);
```
```css
/* CSS styling */
.container {
  display: flex;
  margin: 0 auto;
  max-width: 1200px;
}

#header {
  background-color: #f0f0f0;
  padding: 20px;
}
```
Some real-world code that’s long enough to wrap:
```python
# Training loop
for epoch in range(n_epochs):
    self._elapsed_epochs += 1
    for i, X in enumerate(train_data):
        if i > 2 and loss.isnan():
            print("Loss is NaN. Early stopping.")
            return self
        self._elapsed_batches += 1
        real_X = Variable(X.type(Tensor))
        agg_loss = torch.Tensor([0]).to(self.device)

        # Diffusion process with cosine noise schedule
        for t in range(self.diffusion_steps):
            self._eps = self.privacy_engine.get_epsilon(self._delta)
            if self._eps >= self.epsilon_target:
                print(f"Privacy budget reached in epoch {epoch} (batch {i}, {t=}).")
                return self

            beta_t = get_beta(t, self.diffusion_steps)
            noise = torch.randn_like(real_X).to(self.device) * np.sqrt(beta_t)
            noised_data = real_X + noise

            if self.pred_noise:
                # Use the model as a diffusion noise predictor
                predicted_noise = self.model(noised_data)
                # Calculate loss between predicted and actual noise using MSE
                numeric_loss = mse_loss(predicted_noise, noise)
                categorical_loss = torch.tensor(0.0)
                loss = numeric_loss
            else:
                # Use the model as a mixed-type denoiser
                denoised_data = self.model(noised_data)
                # Calculate numeric loss using MSE
                numeric_loss = mse_loss(
                    denoised_data[:, :categorical_start_idx],
                    real_X[:, :categorical_start_idx],
                )
                # Convert categoricals to log-space (to avoid underflow issues)
                # and calculate KL loss for each original feature
                _idx = categorical_start_idx
                categorical_losses = []
                for _col, _cat_len in self.category_counts.items():
                    categorical_losses.append(
                        kl_loss(
                            torch.log(denoised_data[:, _idx : _idx + _cat_len]),
                            real_X[:, _idx : _idx + _cat_len],
                        )
                    )
                    _idx += _cat_len
                # Average categorical losses over the total number of categories
                # across all categorical features
                categorical_loss = (
                    sum(categorical_losses) / self.total_categories
                    if categorical_losses
                    else 0
                )
                loss = numeric_loss + categorical_loss

            # Add losses from each diffusion step
            agg_loss += loss

        # Average loss over diffusion steps
        loss = agg_loss / self.diffusion_steps
        print(f"Batches: {self._elapsed_batches}, {agg_loss=}")

        # Backward propagation and optimization step
        self.optim.zero_grad()
        loss.backward()
        self.optim.step()

return self
```
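The loop above calls a `get_beta` helper that isn’t shown (and `mse_loss`/`kl_loss` presumably refer to `torch.nn.functional.mse_loss` and `torch.nn.functional.kl_div`, which expects log-probabilities as its first argument, matching the `torch.log(...)` call). A minimal sketch of what a cosine-schedule `get_beta` might look like, assuming the schedule from Nichol & Dhariwal (2021); the signature and the `s` offset are assumptions, not taken from the original source:

```python
import numpy as np

def get_beta(t: int, diffusion_steps: int, s: float = 0.008) -> float:
    """Per-step noise level beta_t under a cosine schedule (assumed helper)."""
    def f(step: float) -> float:
        # Squared-cosine curve that the cumulative signal level follows.
        return np.cos((step / diffusion_steps + s) / (1 + s) * np.pi / 2) ** 2

    # beta_t = 1 - alpha_bar(t+1) / alpha_bar(t); the f(0) normalizer cancels
    # in the ratio. Clip to 0.999 to avoid a singularity at the final step.
    return float(min(1.0 - f(t + 1) / f(t), 0.999))
```

Under that assumption, the training loop scales Gaussian noise by `np.sqrt(beta_t)` exactly as shown above.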
## Tables
| Header 1     | Header 2     | Header 3     |
| ------------ | ------------ | ------------ |
| Row 1, Col 1 | Row 1, Col 2 | Row 1, Col 3 |
| Row 2, Col 1 | Row 2, Col 2 | Row 2, Col 3 |

| Left-aligned | Center-aligned | Right-aligned |
| :----------- | :------------: | ------------: |
## Horizontal Rules

---
## Images
## Footnotes
Here’s a sentence with a footnote[^1].
## Definition Lists
Term 1
: Definition 1

Term 2
: Definition 2a
: Definition 2b
## Abbreviations
The HTML specification is maintained by the W3C.
*[HTML]: Hyper Text Markup Language
*[W3C]: World Wide Web Consortium
## Math (if supported)
Inline math: $E = mc^2$
Block math:
$$ \frac{n!}{k!(n-k)!} = \binom{n}{k} $$
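As a concrete instance of the identity above, taking $n = 5$ and $k = 2$:

$$ \binom{5}{2} = \frac{5!}{2!\,(5-2)!} = \frac{120}{2 \cdot 6} = 10 $$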
## Emoji

:smile: :heart: :thumbsup:
## Custom HTML in `rawhtml`
This is custom HTML within markdown.
- Item 1
- Item 2
## iFrame embed
[^1]: This is the footnote content.