Channel: Questions in topic: "indexes.conf"
Viewing all 236 articles
Browse latest View live

How to find the list of indexes and source types in a specific app?

I have a kind of access called ELEVATED ACCESS in Splunk Enterprise, which sits below POWER USER but above USER, with different apps installed. I have only one app. Is there a way to identify the list of available indexes and source types used in my app?
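A couple of standard searches that may help, sketched under the assumption that your elevated role is allowed to run them; `your_index` is a placeholder:

```spl
| eventcount summarize=false index=* | dedup index | table index

| metadata type=sourcetypes index=your_index | table sourcetype
```

Alternatively, `| rest /services/data/indexes` lists indexes, if your role has access to that REST endpoint.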

Index buckets configuration using time

Hello, dear ninjas! I need to configure my indexes to store data in buckets by time period. For example, for index Test:

* Hot/warm buckets should store data for 60 days, then move it to cold.
* Cold buckets should store data for another 120 days (60 + 120 = 180 days total), then roll outdated data to frozen.
* Frozen should store it for a further 180 days, and after 360 days total the outdated data should be deleted.

I didn't find options for this in the default indexes.conf. Also, should I write a script that moves data from cold to frozen, or does Splunk do it automatically? Reference: "If you do not specify a 'coldToFrozenScript', data is deleted when rolled to frozen." (https://docs.splunk.com/Documentation/Splunk/7.3.2/Admin/Indexesconf) Thank you!
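For what it's worth, a minimal sketch of the settings involved, with illustrative values; the index name and archive path are assumptions, not a verified configuration. Splunk rolls cold buckets to frozen automatically once frozenTimePeriodInSecs is exceeded; there is no direct "days in warm" setting, and deleting archived data after a further period has to happen outside Splunk:

```ini
# Hypothetical stanza for an index named "test".
[test]
# Total hot+warm+cold retention: freeze once events are older than 180 days.
frozenTimePeriodInSecs = 15552000
# Archive frozen buckets here instead of deleting them.
coldToFrozenDir = /opt/splunk/archive/test
# Warm -> cold rolling is driven by bucket count/size, not age.
maxWarmDBCount = 300
```

Removing archived buckets 180 days later would need an external job, e.g. a cron task that deletes old directories under the archive path.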

Remove data after moving index location

I just moved my homePath and coldPath to a new location and wanted to delete the data stored in Splunk's default index location ($SPLUNK_DB). I would leave it, but it's using the bulk of that partition. Can I simply delete these files, or will they be cleaned up on their own as a result of the relocation?

Why is coldPath.maxDataSizeMB taking precedence and growing until parameter is reached?

I have the following configuration for an index, extracted using btool:

```
/opt/splunk/etc/system/local/indexes.conf coldPath.maxDataSizeMB = 1843200
/opt/splunk/etc/system/local/indexes.conf maxTotalDataSizeMB = 500000
```

But the coldPath ignores the **maxTotalDataSizeMB** parameter and keeps getting bigger and bigger:

```
950G    ./colddb
36G     ./frozendb
74M     ./datamodel_summary
4.0K    ./thaweddb
985G    .
```

So I checked how other indexes in my production environment are configured. For those indexes, I have the following:

```
/opt/splunk/etc/system/local/indexes.conf coldPath.maxDataSizeMB = 0
/opt/splunk/etc/system/local/indexes.conf maxTotalDataSizeMB = 500000
```

For these, rotation works properly. So my conclusion is that **coldPath.maxDataSizeMB** takes precedence over **maxTotalDataSizeMB**: even when **maxTotalDataSizeMB** is reached, the cold path keeps growing until **coldPath.maxDataSizeMB** is reached. I hope someone can help me understand this situation, because it is causing a huge problem in my environment. Thank you in advance.
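For comparison, a hedged sketch of the pattern the question reports as rotating properly; the index name is hypothetical and the values are illustrative. Leaving the per-path cap at 0 lets maxTotalDataSizeMB alone drive rotation:

```ini
# Sketch mirroring the "working" configuration described in the question.
[myindex]                       # hypothetical index name
coldPath.maxDataSizeMB = 0      # no separate cap on the cold path
maxTotalDataSizeMB = 500000     # total index size drives retention
```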

Duplicate index stanza in indexes.conf in a clustered environment

Hello Splunkers, I have an indexes.conf file with a duplicate index stanza. If I remove one of them, will it impact anything? Below are my duplicate stanzas; I will probably remove the top one if there's no impact.

```ini
[custom]
homePath = volume:hot/custom/db
coldPath = volume:cold/custom/colddb
thawedPath = $SPLUNK_HOT_DB/custom/thaweddb
maxDataSize = auto_high_volume
frozenTimePeriodInSecs = 31557600
maxTotalDataSizeMB = 500000
homePath.maxDataSizeMB = 66800
repFactor = auto

[custom]
homePath = volume:hot/custom/db
coldPath = volume:cold/custom/colddb
thawedPath = $SPLUNK_HOT_DB/custom/thaweddb
maxDataSize = auto_high_volume
frozenTimePeriodInSecs = 31557600
maxTotalDataSizeMB = 500000
homePath.maxDataSizeMB = 66800
enableTsidxReduction = true
timePeriodInSecBeforeTsidxReduction = 15552000
repFactor = auto
```

Thanks!

Deploy indexes.conf in a Search Head Cluster? How to avoid (and recover in case of) misconfiguration?

We have a Search Head Cluster connected to an Indexer Cluster. All indexes are on the clustered Indexers, and the Search Head Cluster members forward their local internal indexes to the Indexers. Is it best practice to still deploy a copy of the "master" indexes.conf (that gets distributed to the Indexers through the Cluster Master) to the Search Head Cluster members? If so, how? And much more importantly: How do we recover from misconfigurations that stop the Search Head Cluster members from restarting correctly? Scenario: we use the Deployer to deploy a version of indexes.conf that contains a reference to a volume e.g. in homePath that is not defined on the Search Head Cluster members. The Search Head Cluster members will initiate a rolling-restart but not come back online as Splunkd will notice that there are incorrectly defined indexes on the instance. How can we a) avoid this happening and b) if it happens, quickly revert?

Volume configuration will not manage space used by this index

We recently upgraded from 7.2.1 to 7.3.3, and in the `_internal` logs I can see these new warnings showing up across my indexer cluster. What do they mean, and how do I go about fixing this? I've also noticed that an indexer now randomly locks up about once a week. Any insight would be appreciated!

```
01-14-2020 00:58:27.994 +0000 WARN ProcessTracker - (child_17__Fsck) IndexConfig - idx=_introspection Path coldPath='/opt/splunk/var/lib/splunk/_introspection/colddb' (realpath '/mnt/local/hot/_introspection/colddb') is inside volume=primary (path='/mnt/local/hot', realpath='/mnt/local/hot'), but does not reference that volume. Space used by coldPath will *not* be volume-managed. Please check indexes.conf for configuration errors.
01-14-2020 00:58:27.995 +0000 WARN ProcessTracker - (child_17__Fsck) IndexConfig - idx=_telemetry Path coldPath='/opt/splunk/var/lib/splunk/_telemetry/colddb' (realpath '/mnt/local/hot/_telemetry/colddb') is inside volume=primary (path='/mnt/local/hot', realpath='/mnt/local/hot'), but does not reference that volume. Space used by coldPath will *not* be volume-managed. Please check indexes.conf for configuration errors.
01-14-2020 00:58:28.008 +0000 WARN ProcessTracker - (child_17__Fsck) IndexConfig - idx=firedalerts Path coldPath='/opt/splunk/var/lib/splunk/firedalerts/colddb' (realpath '/mnt/local/hot/firedalerts/colddb') is inside volume=primary (path='/mnt/local/hot', realpath='/mnt/local/hot'), but does not reference that volume. Space used by coldPath will *not* be volume-managed. Please check indexes.conf for configuration errors.
01-14-2020 00:58:28.042 +0000 WARN ProcessTracker - (child_17__Fsck) IndexConfig - idx=wineventlog Path homePath='/opt/splunk/var/lib/splunk/wineventlog/db' (realpath '/mnt/local/hot/wineventlog/db') is inside volume=primary (path='/mnt/local/hot', realpath='/mnt/local/hot'), but does not reference that volume. Space used by homePath will *not* be volume-managed. Please check indexes.conf for configuration errors.
```

**indexes.conf**

```ini
# global settings
[default]
lastChanceIndex = lastchance
malformedEventIndex = malformedevent

[volume:primary]
path = /mnt/local/hot
maxVolumeDataSizeMB = 14000000

[volume:cold]
path = /mnt/local/cold
maxVolumeDataSizeMB = 58200000

[volume:_splunk_summaries]
path = /mnt/local/hot
maxVolumeDataSizeMB = 1000000
homePath = volume:primary/$_index_name/db
coldPath = volume:cold/$_index_name/colddb
thawedPath = /mnt/local/cold/$_index_name/thaweddb
homePath.maxDataSizeMB = 2000000
maxWarmDBCount = 250
maxDataSize = auto
enableDataIntegrityControl = true
frozenTimePeriodInSecs = 188697600

[main]
homePath = volume:primary/defaultdb/db
coldPath = volume:cold/defaultdb/colddb
coldToFrozenDir = /mnt/local/cold/frozen/defaultdb
thawedPath = /mnt/local/cold/defaultdb/thaweddb
maxDataSize = auto_high_volume
frozenTimePeriodInSecs = 31536000
...
```

What's the best strategy for volume tags when indexers have different number and size of devices?

I have a large indexer cluster with bare-metal machines that have different hardware configurations. The number of SSDs, their size, and their performance specs differ across the indexers. So what is the best way to use volume tags to abstract these details from the indexes? My thought is to start with a "hot_warm" volume tag, like the example in the indexes.conf spec, that would be defined in $SPLUNK_HOME/etc/system/local/indexes.conf and would point to each indexer's fastest device of a moderate size. But that leaves me with a variable array of devices for the "cold#" volume tags. Is there a way to add a list of devices under one volume tag? Otherwise the $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf file, managed by the index master, will not address all the devices on the indexers. Any advice is welcome. Thanks.

Setting up indexes.conf

Hi, I am setting up an indexes.conf file where I am going to fix the homePath and coldPath sizes. For example:

```ini
[myindex]
homePath = FASTDISK:\splunk\myindex\db
coldPath = SLOWDISK:\splunk\myindex\colddb
thawedPath = SLOWDISK:\splunk\myindex\thaweddb
coldToFrozenDir = ARCHIVEDISK:\splunk\myindex\frozen
maxTotalDataSizeMB = 1000000
homePath.maxDataSizeMB = 600000
coldPath.maxDataSizeMB = 400000
frozenTimePeriodInSecs = 31536000
```

I have the following questions:

1. If I am setting the homePath and coldPath sizes separately, why is it also required to set the whole index size, i.e. `maxTotalDataSizeMB`, or is it not mandatory?
2. I read somewhere that thawedPath does not take into consideration any environment variable or volume path reference. Is that correct? If so, I have to set thawedPath manually.

Thanks,

indexes.conf sanity question.

I wanted to ask here before making this change, just for another set of eyes. The issue: we have /hot and /cold, both with equal amounts of storage and no difference in storage speed between the volumes. Currently data rolls to cold at 90 days, so cold is filling up while hot stays about 20% full. I'd like to set the following to try to keep data in hot/warm for almost half of our global 13-month retention period. Do these settings make sense?

```ini
[default]
####### retention and hot/warm limits #######
repFactor = auto
# To balance disk space, keep more warm buckets than the default 300.
maxWarmDBCount = 3600
# Idle hot buckets roll to warm if no data is written to them in a day.
maxHotIdleSecs = 86400
# Upper bound of the timespan of hot/warm buckets, in seconds.
maxHotSpanSecs = 15778476
# 13 months; data will roll to the bit bucket unless a frozen directory is specified in the stanza.
frozenTimePeriodInSecs = 34136000
# Data coming in on an unconfigured index will land in sandbox.
lastChanceIndex = sandbox
```

Thanks.

Splunk Indexes question

Hi,

1) I want to move my hot/warm buckets to cold after 90 days. Is it possible to roll buckets based on time duration, or only based on volume? I want to keep hot and warm for 90 days, since I am using SSDs for them, and move the data to cold on slower disk after that. Can this setting be applied: maxHotSpanSecs = [90 days]?

2) I also intend to keep hot/warm in one path and cold in another; is the config below right for that? Do I need to reference a `volume:` in homePath too (my hot/warm buckets should be in `/opt/splunk/var/lib/splunk`)?

3) Where should accelerated (tstats) data ideally be stored?

```ini
[default]
homePath = $SPLUNK_DB/$_index_name/db
coldPath = volume:[cold]/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb
tstatsHomePath = volume:_splunk_summaries/$_index_name/datamodel_summary
```

Thanks in advance!
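On question 1, a hedged sketch with illustrative values and a hypothetical index name: maxHotSpanSecs caps the timespan of a single hot bucket, but warm buckets roll to cold based on bucket count or volume size limits rather than age, so a strict "90 days then cold" policy is only approximate.

```ini
[myindex]                # hypothetical index name
# Cap each hot bucket's timespan at 90 days (90 * 86400 seconds).
maxHotSpanSecs = 7776000
# Warm -> cold is driven by bucket count (or volume limits), not age.
maxWarmDBCount = 300
```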

malformedEventIndex, how to troubleshoot and fix logs ending up here

Hello all, I created a malformedEventIndex (`malformedevent`), per inputs.conf. I see 400 million+ events/day from thousands of hosts going to this index from my syslog servers (I have a HF that sends to an indexer cluster). I tried looking at the events to see what would cause them to end up in this index, and patterns shows: 100% __default_indexprocessor_body. This doesn't tell me anything. I went through the documented reasons why events may end up here, and none seem to match:

* events destined for read-only indexes: we don't have these
* log events destined for datatype=metric indexes: no logs on the syslog servers go to metric indexes
* log events with invalid raw data values, like all-whitespace raw: I cat log files on the syslog servers and they are not all-whitespace
* metric events destined for datatype=event indexes: these systems are not sending metric events
* metric events with invalid metric values, like non-numeric values: see above
* metric events lacking required attributes, like metric name: see above

Documentation on this index is extremely sparse, so I am not sure where to go from here. Please help.

I can't understand the bucket segregation in indexes.conf

Question 1: My org has Splunk ES 7.2.x with 4 VMs (Windows OS): 1 search head, 1 deployment server, and 2 indexers.

***Search Head:*** On the search head we installed and configured the **Splunk Add-on for Amazon Web Services**, and we are getting logs in Splunk. Those logs are saved in the index (main) on the search head under **defaultdb/db**, and I didn't set a bucket retention policy. Can you please help me with the exact indexes.conf to set a retention policy that deletes logs older than 1 year?

Question 2: I integrated some server logs (Hadoop, MuleSoft, ForgeRock) into Splunk; these are indexed in index (main). When I looked for the indexes.conf file, I was shocked to find no indexes.conf file anywhere. After checking around, I found _cluster/indexes.conf, which contains **[main] -> repFactor = 0**. From this I gather it is a clustered indexer, which is why it has repFactor = 0. Can you please help me with the exact indexes.conf to set a retention policy that deletes logs older than 1 year on the clustered indexer?
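A minimal sketch of the kind of retention setting being asked about, assuming plain deletion is acceptable; without coldToFrozenDir or coldToFrozenScript, data is deleted when it rolls to frozen:

```ini
# Hypothetical: delete events from "main" once they are older than 1 year.
[main]
frozenTimePeriodInSecs = 31536000
```

On a clustered indexer, this stanza would normally live in the configuration bundle distributed by the cluster master (master-apps/_cluster) rather than being edited directly on each peer.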

Bucket rotation and retention

Hi all, I'm here to ask for some information about a current setting I found on an existing Splunk index. In particular, this is the indexes.conf stanza for index A:

```ini
[A]
homePath = volume:primary/A/db
coldPath = volume:secondary/A/colddb
thawedPath = $SPLUNK_DB/A/thaweddb
homePath.maxDataSizeMB = 15360
coldPath.maxDataSizeMB = 30720
maxWarmDBCount = 4294967295
frozenTimePeriodInSecs = 7776000
maxDataSize = auto
coldToFrozenDir = /splunk/A/frozendb
archiver.enableDataArchive = 0
bucketRebuildMemoryHint = 0
compressRawdata = 1
enableDataIntegrityControl = 0
enableOnlineBucketRepair = 1
enableTsidxReduction = 0
maxTotalDataSizeMB = 102400
minHotIdleSecsBeforeForceRoll = 0
rtRouterQueueSize =
rtRouterThreads =
selfStorageThreads =
suspendHotRollByDeleteQuery = 0
syncMeta = 1
tsidxWritingLevel =
enableDataIntegrityControl = true
```

After checking bucket information via the monitoring console, I have the following questions:

**1) Why is there a hot bucket for index A with startEpoch 16 December and endEpoch 31 December, with 375 MB on disk?** Is it because it has hit neither the size nor the time (default maxHotSpanSecs = 90 days) threshold to roll to warm?

**2) If my requirement is 6 months of retention for this index, how can I be sure the frozenTimePeriodInSecs parameter acts as expected?**

**3) I was thinking of setting maxHotSpanSecs to 1 day for hot to warm, but what about rolling from warm to cold in a way that does not create any problems with configuration changes on existing data?**

Thanks in advance everyone.

Data Archiving and Retirement

I am trying to configure a new instance of Splunk. My requirements for data retention are: searchable for 14 days, archived for 5 years. I have configured indexes.conf as below for my index:

```ini
coldtofrozendir = $SPLUNK_DB/defaultdb/frozendb
frozentimeperiodinsecs = 1209600
```

According to the "Set a retirement and archiving policy" and "indexes.conf" documentation on Splunk docs, the settings I've configured should roll buckets to my frozen directory when the events are two weeks old and leave them there for me to handle. However, the sales engineer and I are stumped as to why the events in the hot bucket are still over 3 months old. Have we read the documentation correctly? Your input is greatly appreciated. Thank you!
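One thing worth checking, offered as an assumption rather than a confirmed diagnosis: indexes.conf attribute names are case-sensitive, so lowercased names like those quoted above may be ignored entirely. The spec-cased form of the same settings would be:

```ini
# Same values as in the question, with spec casing; the stanza name is illustrative.
[main]
coldToFrozenDir = $SPLUNK_DB/defaultdb/frozendb
frozenTimePeriodInSecs = 1209600
```

Also note that a bucket is only frozen after it has rolled to warm and then cold, so a still-open hot bucket can contain events older than frozenTimePeriodInSecs.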

Is it possible to use SmartStore with a standalone docker installation?

Is it possible to use SmartStore with a standalone docker installation? I have been trying to set it up by specifying all my settings in the `indexes.conf` file. It works the first time, but when I destroy the docker container and spin up a new one, it will not read/write to the remote store.